Philosophical Landmines
Related: Cached Thoughts
Last summer I was talking to my sister about something. I don’t remember the details, but I invoked the concept of “truth”, or “reality” or some such. She immediately spit out a cached reply along the lines of “But how can you really say what’s true?”.
Of course I’d learned some great replies to that sort of question right here on LW, so I did my best to sort her out, but everything I said invoked more confused slogans and cached thoughts. I realized the battle was lost. Worse, I realized she’d stopped thinking. Later, I realized I’d stopped thinking too.
I went away and formulated the concept of a “Philosophical Landmine”.
I used to occasionally remark that if you care about what happens, you should think about what will happen as a result of possible actions. This is basically a slam dunk in everyday practical rationality, except that I would sometimes describe it as “consequentialism”.
The predictable consequence of this sort of statement is that someone starts going off about hospitals and terrorists and organs and moral philosophy and consent and rights and so on. This may be controversial, but I would say that causing this tangent constitutes a failure to communicate the point. Instead of prompting someone to think, I invoked some irrelevant philosophical cruft. The discussion is now about Consequentialism, the Capitalized Moral Theory, instead of the simple idea of thinking through consequences as an everyday heuristic.
It’s not even that my statement relied on a misused term or something; it’s that an unimportant choice of terminology dragged the whole conversation in an irrelevant and useless direction.
That is, “consequentialism” was a Philosophical Landmine.
In the course of normal conversation, you passed through an ordinary spot that happened to conceal the dangerous leftovers of past memetic wars. As a result, an intelligent and reasonable human was reduced to a mindless zombie chanting prerecorded slogans. If you’re lucky, that’s all. If not, you start chanting counter-slogans and the whole thing goes supercritical.
It’s usually not so bad, and no one is literally “chanting slogans”. There may even be some original phrasings involved. But the conversation has been derailed.
So how do these “philosophical landmine” things work?
It looks like when a lot has been said on a confusing topic, usually something in philosophy, there is a large complex of slogans and counter-slogans installed as cached thoughts around it. Certain words or concepts will trigger these cached thoughts, and any attempt to mitigate the damage will trigger more of them. Of course they will also trigger cached thoughts in other people, which in turn… The result is that the conversation rapidly diverges from the original point to some useless yet heavily discussed attractor.
Notice that whether a particular concept will cause trouble depends on the person as well as the concept. Notice further that this implies that the probability of hitting a landmine scales with the number of people involved and the topic-breadth of the conversation.
Anyone who hangs out on 4chan can confirm that this is the approximate shape of most thread derailments.
Most concepts in philosophy and metaphysics are landmines for many people. The phenomenon also occurs in politics and other tribal/ideological disputes. The ones I’m particularly interested in are the ones in philosophy, but it might be useful to divorce the concept of “conceptual landmines” from philosophy in particular.
Here are some common ones in philosophy:
Morality
Consequentialism
Truth
Reality
Consciousness
Rationality
Quantum
Landmines in a topic make it really hard to discuss ideas or do work in that field, because chances are someone is going to step on one, and then there will be a big noisy mess that interferes with the rather delicate business of thinking carefully about confusing ideas.
My purpose in bringing this up is mostly to precipitate some terminology and a concept around this phenomenon, so that we can talk about it and refer to it. It is important for concepts to have verbal handles, you see.
That said, I’ll finish with a few words about what we can do about it. There are two major forks of the anti-landmine strategy: avoidance, and damage control.
Avoiding landmines is your job. If it is a predictable consequence that something you could say will put people in mindless slogan-playback-mode, don’t say it. If something you say makes people go off on a spiral of bad philosophy, don’t get annoyed with them, just fix what you say. This is just being a communications consequentialist. Figure out which concepts are landmines for which people, and step around them, or use alternate terminology with fewer problematic connotations.
If it happens, which it does, as far as I can tell my only effective damage-control strategy is to abort the conversation. I’ll probably think that I can take on those stupid ideas here and now, but that’s just the landmine trying to go supercritical. Just say no. Of course, letting on that you think you’ve stepped on a landmine is probably incredibly rude; keep it to yourself. Subtly change the subject, or rephrase your original point without the problematic concepts, or something.
A third prong could be playing “philosophical bomb squad”, which means permanently defusing landmines by supplying satisfactory nonconfusing explanations of things without causing too many explosions in the process. Needless to say, this is quite hard. I think we do a pretty good job of it here at LW, but for topics and people not yet defused, avoid and abort.
ADDENDUM: Since I didn’t make it very obvious, it’s worth noting that this happens with rationalists, too, even on this very forum. It is your responsibility not to contain landmines as well as not to step on them. But you’re already trying to do that, so I don’t emphasize it as much as not stepping on them.
I love the landmine metaphor—it blows up in your face and it’s left over from some ancient war.
Not always from some ancient war.
This is quite true.
This is insightful. I also think we should emphasize that it is not just other people, or silly theists and epistemic relativists who don’t read Less Wrong, who can get exploded by Philosophical Landmines. These things are epistemically neutral, and the best philosophy in the world can still become slogans if it gets discussed too much. E.g.:
Now I wasn’t there and I don’t know you. But it seems at least plausible that that is exactly what your sister felt she was doing. That this is what having your philosophical limbs blown off feels like from the inside.
I think I see this phenomenon most with activist-atheists who show up everywhere prepared to categorize any argument a theist might make and then give a stock response to it. It’s related to arguments as soldiers. In addition to avoiding and disarming landmines, I think there is a lot to be said for trying to develop an immunity, so that even if other people start tossing out slogans, you don’t. I propose that it is good policy to provisionally accept your opponent’s claims and then let your own arguments do their work on those claims in your mind before you let them out.
So...
Theist: “The universe is too complex for it to have been created randomly.”
Atheist (pattern matches this claim to one she has heard a hundred times before, searches for the most relevant reply, outputs): “Natural selection isn’t random and in that case how was God created?”
KABOOM!
Instead:
Theist: “The universe is too complex for it to have been created randomly.”
Atheist (entertains the argument as if she had no prior experience with it, sees what makes the argument persuasive for some people, then searches for replies and applies them to the argument: “Is natural selection really random? Oh, and God, to the extent He is supposed to be like a human agent, would be really complicated too. So that just pushes the problem of developing complexity back a step.”): “Oh yeah, I’ve heard things like that before. Here are two issues with it....”
Obviously this is hard to do, and maybe not worthwhile in every situation.
“Hmm … what do you mean by ‘randomly’?”
YES. The most effective general tactic in religious debates is to find out what exactly the other guy is trying to say, and direct your questions at revealing what you suspect are weak points in the argument. Most of this stuff has a tendency to collapse on its own if you poke it hard enough—and nobody will be able to accuse you of making strawman arguments, or not listening.
Of course that goes for all debates, not just religious ones.
That one makes me crazy. How can anyone think they know enough about universes to make a strong claim about what sorts of universes are likely?
I don’t understand how the Atheist gets from the Theist’s claims about the creation of the universe to “natural selection”. I thought that was the bad pattern-matching in the first example, but then they make the same mistake in the second example. Does the Atheist think the universe is an evolved creature?
The atheist doesn’t think the universe evolved; he thinks the complex things in the universe evolved. The theist I was modeling is thinking of, say, the human eyeball when he thinks about the complexity of the universe. But I agree the shorthand dialogue is ambiguous.
Some complex things in the universe evolved. But plenty didn’t. The ‘fine-tuning of physical constants’ argument is quite fashionable at the moment.
It is interesting how easy it is to project a certain assumption onto arguments, though. Because your atheist response assumed natural selection, while a response above protests that the theist hasn’t experienced enough universes to generalise about them, so presumably interprets the statement to refer to the universe as a whole. All the more reason to make sure you understand what people mean, I suppose!
Yep. Fixed that to be more clear.
Also added an addendum reflecting your first point (not just irrationalists).
You’re right. Exactly.
Unless there are on the order of $2^{K(\text{Universe})}$ universes (where $K$ is Kolmogorov complexity), the chance of it being constructed randomly is exceedingly low.
Please, do continue.
An extremely low probability of the observation under some theory is not itself evidence. It’s extremely unlikely that I would randomly come up with the number 0.0135814709894468, and yet I did.
It’s only interesting if there is some other possibility that assigns a different probability to that outcome.
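A minimal way to make this precise, using standard Bayesian notation (this formalization is my addition, not part of the original comment): what matters is not $P(E \mid H)$ on its own, but the likelihood ratio between the competing hypotheses,
$$\frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)}.$$
If both hypotheses assign the same (however tiny) probability to the observation, the likelihood ratio is 1 and the posterior odds do not move; the observation only counts as evidence when the two hypotheses assign it different probabilities.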
I can’t assign any remarkable property to the number you came up with, not even the fact that I came up with it myself earlier (I didn’t). So of course I’m not surprised that you came up with it.
Our universe, on the other hand, supports sentient life. That’s quite a remarkable property, which presumably spans only a tiny fraction of all “possible universes” (whatever that means). Then, under the assumption that our universe is the only one and was created randomly, we’re incredibly lucky to exist at all. So, there is a good deal of improbability to explain. And God just sounds perfect.
Worse, I actually think God is a pretty good explanation until you know about evolution. And Newton, to hint at the universality of simple laws of physics. And probably more, since we still don’t know for instance how intelligence and consciousness actually work —though we do have strong hints. And multiverses, and anthropic reasoning and…
Alas, those are long-winded arguments. I think the outside view is more accessible: technology accomplishes more and more impressive miracles, and it doesn’t run on God (it’s all about mechanisms, not phlebotinum). But I don’t feel this is the strongest argument, and I wouldn’t try to present it as knock down.
Now, if I actually argue with a theist, I won’t start by those. First, I’ll try to agree on something. Most likely, non-contradiction and excluded middle. Either there is a God, or there is none, and either way, one of us is mistaken. If we can’t agree with even that (happens with some agnostics), I just give up. (No, wait, I don’t.)
Or the two of you mean different things by “a God”, or even by “there is”.
First things first.
Make sure everyone accepts the non-contradiction principle, and mark the ones that don’t as “beyond therapy” (or patiently explain that “truth” business to them).
Make sure everyone uses the same definitions for the subject at hand.
My limited experience tells me that most theists will readily accept point one, believing that there is a God, and I’m mistaken to believe otherwise. I praise them for their coherence, then move on to what we actually mean by “God” and such.
EDIT: I may need to be more explicit: I think the non-contradiction principle is more fundamental than the possible existence of a God, and as such should be settled first. Settling the definition for any particular question always comes second. It only seems to come first in most discussions because the non-contradiction principle is generally common knowledge.
Also, I do accept there is a good chance the theist and I do not put the same meanings under the same words. It’s just simpler to assume we do when using them as an example for discussing non-contradiction: we don’t need to use the actual meanings yet. (Well, with one exception: I assume “there is a God” and “there is no God” are mutually contradictory for any reasonable meaning we could possibly put behind those words.)
What exactly do you mean by “good explanation”?
Probable, given the background information at the time.
Before Darwin, remember that the only known powerful optimization processes were sentient beings. Clearly, Nature is the result of an optimization process (it’s not random by a long shot), and a very, very powerful one. It’s only natural to think it is sentient as well.
That is, until Darwin showed how a mindless process, very simple at its core, can do the work.
I’ve come across this argument before—I think Dawkins makes it. It shows a certain level of civilised recognition of how you might have another view. But I think there’s also a risk that because we do have Darwin, we’re quick to just accept that he’s the reason why Creation isn’t a good explanation. I actually think Hume deconstructed the argument very well in his Dialogues Concerning Natural Religion.
Explaining complexity through God suffers from various questions: 1) what sort of God (Hume suggests a coalition of Gods, warring Gods, a young or old and senile God as possibilities); 2) what other things seem to often lead to complexity (the universe as an organism, essentially); 3) the potential of sheer magnitude of space and time to allow pockets of apparent order to arise.
Whose answers tend to just be “Poof Magic”. While I do have a problem with “Poof Magic”, I can’t explain it away without quite deep scientific arguments. And “Poof Magic”, while unsatisfactory to any properly curious mind, has no complexity problem.
Now that I think of it, I may have to qualify the argument I made above. I didn’t know about Hume, so maybe the God Hypothesis wasn’t so good even before Newton and Darwin after all. At least assuming the background knowledge available to the best thinkers of the time.
The laypeople, however, may not have had a choice but to believe in some God. I mean, I doubt there was some simple argument they could understand (and believe) at the time. Now, with the miracles of technology, I think it’s much easier.
Have you ever come up with what appeared (to you) to be a knock-down argument for your position; but only after it was too late to use it in whatever discussion prompted the thought? Have you ever carried that argument around in the back of your skull, just waiting for the topic to come up again so you can deploy your secret weapon?
I have. Hypothesis: At least some of the landmines you describe stem from engaging someone on a subject where they still have such arguments on their mental stack, waiting not for confirmation or refutation but just for an opportunity to use them.
The French, unsurprisingly, have an idiom for this.
The English have a word for that.
I was aware of that and was waiting for an excuse to use the idea.
So… Many… Times...
Of course, since you’re not the person who actually believes in the position you’re trying to refute, it’s much harder to judge what would actually seem like a knockdown argument to someone who does believe it.
I’ve stood by watching lots of arguments where people hammered, over and over, in frustration (“how can you not see this is irrefutable!?”), on the same points that I had long ago filed away for future notice as Not Convincing.
If I ever had this tendency it was knocked out of me by studying mathematics. I recommend this as a way of vastly increasing your standard for what constitutes an irrefutable argument.
I’m not even an amateur at math, but yes yes yes. Doing a few proofs and derivations and seeing the difference between math-powered arguments and not is huge.
This has often happened to me. Whenever it happens, I immediately steer the direction of the conversation back to the relevant discussion again, instead of just waiting passively for it to arise in the future.
“Well, that’s a pretty controversial topic and I’d rather our conversation not get derailed.”
Sometimes you can tell people the truth if you’re respectful and understated.
I find it annoying when people say that sort of thing. I want to respond (though don’t, because it’s usually useless) along the lines of:
“Yes, it is a controversial topic. It’s also directly relevant to the thing we are discussing. If we don’t discuss this controversial thing, or at least figure out where we each stand on it, then our conversation can go no further; discussing the controversial thing is not a derailment, it’s a necessary precondition for continuing to have the conversation that we’ve been having.
… and you probably knew that. What you probably want isn’t to avoid discussing the controversial thing, but rather you want for us both to just take your position on the controversial thing as given, and proceed with our conversation from there. Well, that’s not very respectful to someone who might disagree with you.”
Fair enough.
One way to deal with this issue might be to make it clear that the discussion proceeds conditional on the correctness of their position on the controversial thing. Either side could do this.
I think that only works if you say “even if that were true, which we don’t need to discuss now, I would argue that...” It’s much harder to get someone to accept “for the sake of argument” something they strongly disagree with.
For example, I would only accept “morality comes from the Bible” if I had a convincing Bible quote to make my point.
My ideal approach (not that I always do so) is more to stop taking sides and talk about the plus and minuses of each side. My position on a lot of subjects does boil down to “it’s really complicated, but here are some interesting things that can be said on that topic”. I don’t remember having problems being bombarded with slogans (well—except with a creationist once) since usually my answer is on the lines of “eh, maybe”. (this applies particularly to consequentialism and deontology, morality, rationality, and QM).
I also tend to put the emphasis more on figuring out whether there is actually a substantial disagreement, or just different use of terms (for things like “truth” or “reality”).
I use a similar approach, and it usually works. Make clear that you don’t think you’re holding the truth on the subject, whatever it means, but only that the information in your possession led you to the conclusion you’re presenting. Manifest curiosity for the position the other side is expressing, even if it’s cached, and even if you heard similar versions of it a hundred times before. Try to drive them out of cached mode by asking questions and forcing them to think for themselves outside of their preconstructed box. Most of all, be polite, and point out that it’s fine to have some disagreement, since the issue is complicated, and that you’re really interested in sorting it out.
This is especially important in the particular case mentioned in the OP—it’s easy to jump from seeing someone saying that the truth exists to thinking that they think they have that truth.
Right. I suppose refusing to chant slogans is a good way to avoid the supercritical slogan ping-pong.
I lean towards abort mostly out of learned helplessness, and because if I think about it soberly enough to strategize, I see little value in any strategy that involves sorting out someone’s confusion instead of disengaging. Just my personal approach, of course.
Thanks for bringing up alternate damage control procedures; we’ll need as many as we can get.
Even that can fail, especially with “truth” and “reality”. I have a concept behind “truth”, and I’m sometimes not even allowed to use it, because the term “truth” has been hijacked to mean “really strong belief” instead (sometimes explicitly so, when I ask). I do try to get them to use “belief”, and let me use “truth” (or “correctness”) my way, but it’s no use. I believe their use of a single word for two concepts mixed them up, and I never managed to separate them back.
Also, they know that if they let me define words the way I want, I’ll just win whatever argument we’re having. It only convinces them that my terminology must somehow be wrong.
Finally, there is also a moral battle here: the very idea of absolute, inescapable truth, whether you like it or not, reeks of dogmatism. As history has shown us countless times, dogmatism is Baad™. (The Godwin point is really a fixpoint.)
Maybe I have a different strategy in mind to you, but when I try this, I just end up sitting there trying ineffectually to have an actual conversation while they spew slogans at me.
Taking the outside view, what distinguishes your approach from hers?
nyan’s cached thoughts are better? I mean, it would be nice if we could stop relying on cached thoughts entirely, but there’s just not enough time in the day. A more practical solution is to tend your garden of cached thoughts as best you can. The problem is not just that she has cached thoughts but that her garden is full of weeds she hasn’t noticed.
“Outside view” refers to some threshold of reliability in the details that you keep in a description of a situation. If you throw out all relevant detail, “outside view” won’t be able to tell you anything. If you keep too much detail, “outside view” won’t be different from the inside view (i.e. normal evaluation of the situation that doesn’t invoke this tool). Thus, the decision about which details to keep is important and often non-trivial, in which case simply appealing to something not being “outside view” is not helpful.
I took it to be obvious that “taking the outside view” in the grandparent comment meant dropping the detail that ‘Nyan and everyone here thinks his cached thoughts are better than his sister’s’ and so Qiaochu_Yuan’s reply was not answering the question.
I have inside view reasons to believe that nyan’s cached thoughts are genuinely better. If the point you’re trying to make is “repeating cached thoughts is in general not a productive way to have arguments, and you should assume you’re doing this by default,” I agree but don’t think that the outside view is a strong argument supporting this conclusion. And I still think that cached thoughts can be useful. For example, having a cached thought about cached thoughts can be useful.
I agree with all that: my point was just that the question you were replying to asked about the outside view (which in this context I took to mean excluding the fact that we think our cluster of ideas is better than Nyan’s sister’s cluster of ideas). I’m just saying: rationalists can get exploded by philosophical landmines too and it seems worthwhile to be able to avoid that when we want to even though our philosophical limbs are less wrong than most people’s.
Or to put it another way: philosophical landmines seem like a problem for self-skepticism because they keep you from hearing and responding adequately to the concerns of others. So any account of philosophical landmines ought to be neutral on the epistemic content of sloganeering since assuming we’re on the right side of the argument is really bad for self-skepticism.
Um, I just wanted to parachute in and say that “cached thought” should not be a completely general objection to any argument. All it means is that you’ve thought about it before and you’re not, as it were, recomputing everything from scratch in real time. There is nothing epistemically suspect about nyan_sandwich not re-deriving their entire worldview on the spot during every conversation.
Cached thoughts can be correct! The germane objection to them comes when you never actually, personally thought the thought that was cached (it was just handed to you by your environment), or you thought it a long time ago and you need to reconsider the issue.
“Cached thought!” doesn’t refute the thought. But the OP suggests that persuading his sister of something true was made more difficult by her cached thoughts on the subject. I’ve definitely seen similar things happen before. If that is the case, then I’ve probably done it before. I want to be convinced of true things if they are true. So if I have the time and energy, it is advantageous to suppress cached thoughts and recompute my answers. Maybe this is only worth doing with certain interlocutors—people with sufficient education, intelligence and thoughtfulness. Or maybe it is worth doing on a long car ride but not in a rapid-flowing conversation at a party.
Part of the problem is that thoughts aren’t tagged with this information. Which is why it helps to recompute often and it is often easier to recompute when someone else is providing counter-arguments.
Not much. It feels reasonable on the inside, but in outside view, everyone is chanting slogans.
That’s what makes the landmine process so nefarious. If only one person is chanting slogans, it’s just a cached thought, but if the response is also a cached thought and causes more, the conversation is derailed and people aren’t thinking anymore. (Though as Qiaochu says, using cached thoughts is usually not bad.)
Maybe my self deprecation was too subtle? I’m going to fix it, but internet explorer is too shitty to fix it right now.
This happened to me two days ago, in a workshop on inter-professionalism and conflict resolution, with the word “objective”. Which, in hindsight, was kind of obvious, and I should have known to avoid it. The man leading the discussion was a hospital chaplain–a very charismatic man and a great storyteller, but with a predictably different intellectual background than mine. It wouldn’t have been that hard to avoid the word. I was describing what ties different health care professionals together–the fact that, y’know, together they’re taking care of the same patient, who exists as a real entity and whose needs aren’t affected by what the different professionals think they are. I just forgot, for a moment, that people tend to have baggage attached to the word “objective”, since to me it seems like such a non-controversial term. In general, not impressed with myself, since I did know better.
Have mainstream philosophers come up with a solution to that? Can LessWrongians learn from them? Do LessWrongians need to teach them?
Try to come up with the least controversial premise that you can use to support your argument, not just the first belief that comes to mind that will support it.
To use a (topical for me) example: someone at Giving What We Can might support effective charities “because it maximises expected utility, so you’re obliged to do so, duh”, but “if you can do a lot of good for little sacrifice, you should” is a better premise to rely on when talking to people in general, as it’s weaker but still does the job.
Not yet I think. Maybe the OP will help us talk about the problem and come up with a solution?
I would be interested to know if this is a known thing in philosophy.
I thought you were the OP.
Inasmuch as my questions were a disguised point, it is that flare-ups are rare in professional philosophy, and that this is probably because PP has a praxis for avoiding them, which is learnt by absorption and not set out explicitly.
Sorry for the ambiguous terminology. OP as in Original Post, not Original Poster.
As a first guess on the philosophy thing, I imagine it is structural and cultural. AFAIK they don’t really have the kinds of discussions that someone can sneak a landmine into. And they seem civilized enough to not go off on tangents of cached thoughts. I just can’t imagine stepping on a philosophical landmine while talking to a professional philosopher. Even if their ideas were wrong, it wouldn’t be like that.
So I wonder what we can learn from them about handling landmines in the field, besides “sweep your area clear of landmines, and don’t go into the field”.
Additional ambiguity of “OP”: does it refer to the top-level comment or to the article the comment is on? Does anyone have a good way to make this clear when using the term?
I’ve never seen that as an additional ambiguity. I’ve always understood “OP” to mean “the original article”, and never “the top level comment”. But maybe this is because I’ve just never encountered the other use (or didn’t notice when someone meant it to refer to the top level comment).
Here’s a heuristic: There are no slam-dunks in philosophy.
Here’s another: Most ideas come in strong, contentious flavours and watered-down, easy-to-defend flavours.
But water things down enough and they are no longer philosophy.
But the Capitalized Moral Theory is what “consequentialism” means. You have invented a severely watered down version “the everyday heuristic”. How could anyone guess that’s what you mean? I wouldn’t have.
Agreed except for this:
In my experience the strong flavors are easier to defend, except that they tend to trip people’s absurdity heuristics.
This can be a substantial problem in psychology, even when the original “memetic war” has real scientific content and consequences. Debates between behaviorism and cognitive psychology (for example) are often very attractive and useless tangents for anyone discussing the validity of clinical treatments from cognitive and/or behavioral traditions, when the empirical outcomes of treatment are the original topic at hand.
I don’t think I understand the problem. You say that “consequentialism is a slam-dunk as far as practical rationality goes,” this comment is interpreted as the claim that “The moral theory called Consequentialism is true,” and the person you’re talking to asks you what you think about some common and relevant objections to that moral theory.
What’s gone wrong? “Consequentialism” is used in philosophy as a term for the view that the morality of an action depends only on the consequences of that action. So I don’t think you can blame your interlocutor for interpreting the term this way and raising relevant objections to this view. You seem to be using the term in an idiosyncratic way—as referring to the view that we should think about the consequences of our actions before performing them (or something like that).
The lesson of your story seems not to be that we should avoid using terms like “consequentialism,” but that we should be careful to explain exactly what we mean by a term if we’re going to be using it in an idiosyncratic way.
Worse, the idea of consequentialism (or “utilitarianism”) is often taken by people with a little philosophy awareness to mean the view that “the ends justify the means” without regard for actual outcomes — that if you mean well and can convince yourself that the consequences of an action will be good, then that action is right for you and nobody has any business criticizing you for taking it.
What this really amounts to is not “the ends justify the means”, of course, but the horrible view that “good intentions justify actions that turn out to be harmful.”
Ambiguity on “ends” probably does some damage here: it originally referred to results but also came to have the sense of purposes, goals, or aims, in other words, intentions.
Thanks for making this obvious. Yes, this is the typical consequentialism strawman, but I had not noticed exactly how the meanings of the words are redefined.
I’m confused. Consequentialists do not have access to the actual outcomes when they are making their decisions, so using that as a guideline for making moral decisions is completely unhelpful.
It also seems that your statement that good intentions don’t justify the means is false. Consider this counterexample:
I have 2 choices A and B. Option A produces 2 utilons with 50% probability and −1 utilon the rest of the time. Choice B is just 0 utilons with 100% probability.
My expected utility for option A is +0.5 utilons which is greater than the 0 utilons for option B so I choose option A.
Let’s say I happen to get the −1 utilon in the coin flip.
At this point, it seems the following three things are true:
I had good intentions
I had a bad actual outcome, compared to what would have happened if I had chosen option (or “means”) B
I made the right choice, given the information I had at the time
Therefore, from this example it certainly seems like good intentions do justify the means. But perhaps you meant something else by “good intentions”?
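For concreteness, here is the same arithmetic as a tiny Python sketch (just an illustration of the numbers in the example above, not anything the commenters actually ran):

```python
# Toy expected-utility comparison from the example above.
# Each option is a list of (probability, utilons) pairs.
option_a = [(0.5, 2), (0.5, -1)]   # 2 utilons half the time, -1 otherwise
option_b = [(1.0, 0)]              # 0 utilons with certainty

def expected_utility(option):
    """Sum of probability-weighted utilons."""
    return sum(p * u for p, u in option)

print(expected_utility(option_a))  # 0.5
print(expected_utility(option_b))  # 0.0
# Option A has the higher expected utility, even though the realized outcome
# can still be -1 on any particular flip.
```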
Certainly it is bad to delude yourself, but it seems like that is just a case of having the wrong intentions… that is, you should have intended to prevent yourself from becoming deluded in the first place, proceeding carefully and realizing that as a human you are likely to make errors in judgement.
Plus, people often are unaware of moral injunctions, or even consequentialist heuristics, and tend to think of highly disruptive, ruthlessly optimizing, Grindelwald-style consequentialism, or else of a variety of failure modes that seem consequentialistic but not actually good, such as wireheading, euthanasia, schemes involving extreme inequality, etc.
It doesn’t help that consequentialists care about expected utility, which can be a lot closer to intentions than results.
I rewrote that section to clarify. Does it help?
I wish we had a communal list of specific phrases that cause a problem with verbiage substitutes that appear to work. I have been struggling to avoid phrasing with landmine-speak for… I don’t know how long. It’s extremely difficult. I would be happy to contribute my lang-mines and replacements to a communal list if you were to create it. I’d go make the post right now, but I don’t want to steal karma away from you, Nyan, so I leave you with the opportunity to choose to do it first and structure it the way you see fit.
First off, it’s in main, not discussion. That would be why it’s not in the discussion list.
Secondly, do-ocracy. If you do it, I will accept your legitimacy as supreme dictator of it, and contribute what I can. I’m working on the next one (and enjoying some quality 4chan time), so you’re not stepping on my toes.
Also, step on more toes. It’s not like anyone can fire you.
Oh. Fixed. Grumbles something about getting more sleep.
You are informing me that this is the social norm at LW, or is this your personal idea of the way we should behave?
Oh, I do, but I usually choose my battles. Thanks though.
starts to consider the best way to structure this list
I dunno about LW, but that’s how legitimacy works in most technical volunteer communities (hackerspaces, open source, etc). I also think it’s a good idea.
As is common with social systems there are a lot of hidden complications, but yes LW do-cratic.
This is good personal advice but a bad norm for a community to hold. Can you see why?
You might be walking around a landmine in order to get to the nuclear bomb. So long as you’re defusing something, I think you’re ok.
But I agree that we don’t want to become too habitual at avoiding such issues.
I suspect this is best to do before the “landmine” (which I read as being similar to a factional disagreement, correct me if I’m wrong) gets set off… that is, try to explain away the landmine before it’s been triggered or the person you’ve talked to has expressed any kind of opinion.
Yes. I was actually thinking of this as a project on its own. Separate from any given operation, it’s a good idea to clear the field of landmines. Trying to do it as part of some other conversation is just begging to set something off and destroy the conversation, not just the minesweeping attempt.
For me, reading LW is like removing the landmines. Many articles are essentially: “if you read or hear this, don’t automatically react this way, because that’s wrong”.
The problem is, when there are hundreds of landmines, you need to remove hundreds of them. Even if you know exactly in advance where your path will go, and you only clean landmines in the path, there is still at least a dozen of them. And all this explaining takes time. So in order to get some explanation safely across, you first need to educate the person for a while. Which feels like manipulation. And cannot be done when time is short.
WTF!? Nearly every consequentialist I know is in favor of the Capitalized Moral Theory while admitting that virtue ethics and deontology might work better as everyday heuristics.
I find that small-c consequentialism is a huge step up from operating on impulse, but “deontological injunctions” are good for special-casing expensive and tricky situations.
However, most of the good deontological injunctions that I know of are phrased like “don’t do X, because Z will happen instead of the naive Y”, which is explicitly about consequences and expectations. Likewise for “don’t do X, because the conclusion that X is right is most likely caused by an error”.
The difference is more between cacheing and recomputing than between deontology and consequentialism.
Actually, I don’t even know what the moral theory “Consequentialism” means...
Those don’t seem like deontological injunctions to me at all. Can you clarify why you call them that?
I’ve heard people refer to something that does approximately that job as deontological injunctions. I don’t much like the name either.
Do you have a real example of deontology outperforming consequentialism IRL? Bonus: do you have one that isn’t just hardcoding a result computed by consequentialism? (not intended to be a challenge, BTW; actually curious, because I’ve heard a lot of people imply this, but can’t find any examples)
I’m not sure what that would look like. If consequentialism and deontology shared a common set of performance metrics, they would not be different value systems in the first place.
For example, I would say “Don’t torture people, no matter what the benefits of doing so are!” is a fine example of a deontological injunction. My intuition is that people raised with such an injunction are less likely to torture people than those raised with the consequentialist equivalent (“Don’t torture people unless it does more good than harm!”), but as far as I know the study has never been done.
Supposing it is true, though, it’s still not clear to me what is outperforming what in that case. Is that a point for deontological injunctions, because they more effectively constrain behavior independent of the situation? Or a point for consequentialism, because it more effectively allows situation-dependent judgments?
At least one performance metric that allows for the two systems to be different is: “How difficult is the value system for humans to implement?”
They can be different metaethically, but the same at the object level.
Seems to me that even supposedly deontologic arguments usually have some (not always explicit) explanation, such as ”...because God wants that” or ”...because otherwise people will not like you” or maybe even ”...because the famous philosopher Kant would disagree with you”. Although I am not sure whether those explanations were present since the beginning, or whether just my consequentialist mind adds them when modeling other people. Do you know real deontologists that really believe “Do X and don’t do Y” without any explanation whatsoever? (How would they react if you ask them “why”?)
Assuming my model of deontologists is correct, then their beliefs are like “Don’t torture people, no matter what the benefits of doing so are, because God does not want you to torture people!” Then all it needs is a charismatic priest who explains that, for some clever theological reasons, God actually does not mind you torturing this specific person in this specific situation. (For other implicit explanations, add other convincing factors.) For example, a typical deontologist norm, “Thou shalt not kill”, was violated routinely, often with a clever explanation of why it does not apply to this specific class of situations.
Without any explanation? No.
Without any appeal to expected consequences? Yes.
In general, the answer to “why?” from these folks is some form of “because it’s the right thing to do” and “because it’s wrong.” For theists, this is sometimes expressed as “Because God wants that,” but I would not call that an appeal to expected consequences in any useful sense. (I have in fact asked “What differential result do you expect from doing X or not doing X?” and gotten the response “I don’t know; possibly none.”)
Just for clarity, I’ll state explicitly that most of the self-identified deists I know are consequentialists, as evidenced by the fact that when asked “why should I refrain from X?” their answer is “because otherwise you’ll suffer in Hell” or “because God said to and God knows a lot more than we do and is trustworthy” or something else in that space.
The difference between that position and “because it’s wrong” or “because God said to and that means it’s wrong” is sometimes hard to tease out in casual conversation, though.
It may be worth adding that in some sense, any behavioral framework can be modeled in utilitarian terms. That is, I could reply “Oh! OK, so you consider doing what God said to be valuable, so you have a utility function for which that’s a strongly weighted term, and you seek to maximize utility according to that function” to a theist, or ”...so you consider following these rules intrinsically valuable...” to a nontheist, or some equivalent. But ordinarily we don’t use the label “utilitarian” to refer to such people.
Sure. In much the same sense that I can convince a consequentialist to torture by cleverly giving reasons for believing that the expected value, taking everything into account, of torturing this specific person in this specific situation is positive. As far as I know, no choice of value system makes me immune to being cleverly manipulated.
The agents that can be modeled as having a utility function are precisely the VNM-rational agents. Having a deontological rule that you always stick to even in the probabilistic sense is not VNM-rational (it violates continuity). On the other hand, I don’t believe that most people who sound like they’re deontologists are actually deontologists.
That’s interesting, can you elaborate?
This is something of a strawman, but suppose one of your deontological rules was “thou shalt not kill” and you refused to accept outcomes where there is a positive probability that you will end up killing someone. (We’ll ignore the question of how you decide between outcomes both of which involve killing someone.) In the notation of the Wikipedia article, if L is an outcome that involves killing someone and M and N are not, then the continuity axiom is not satisfied for (L, M, N).
Behaving in this way is more or less equivalent to having a utility function in which killing people has infinite negative utility, but this isn’t a case covered by the VNM theorem (and is a terrible idea in practice because it leaves you indifferent between any two outcomes that involve killing people).
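For reference, here is a rough sketch of the axiom being appealed to, in standard VNM notation (my restatement, not a quote of the Wikipedia article):
$$\text{Continuity: if } L \preceq M \preceq N, \text{ then there exists } p \in [0,1] \text{ such that } pL + (1-p)N \sim M.$$
Take $L$ to be an outcome in which someone is killed, and $M \prec N$ two ordinary outcomes. The rule “never accept any positive probability of killing” gives $pL + (1-p)N \prec M$ for every $p > 0$, while at $p = 0$ the mixture is just $N$, which is strictly preferred to $M$; so no $p$ yields the required indifference, and continuity fails.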
I’m trying to avoid eliding the difference between “I think the right thing to do is given by this rule” and “I always stick to this rule”… that is, the difference between having a particular view of what morality is, vs. actually always being moral according to that view.
But I agree that VNM-violations are problematic for any supposedly utilitarian agent, including humans who self-describe as deontologists and I assert above can nevertheless be modeled as utilitarians, but also including humans who self-describe as utilitarians.
Yes, and in my experience consequentialists usually have deontological sounding explanations for their choice of utility function.
How would a consequentialist react if I asked why maximize [utility function X]?
And all that a consequentialist needs to start torturing people is a clever argument for why torturing this specific person in this specific situation maximizes utility.
TO be clear, are you saying they both have the same response or that this is also a valid criticism of consequentialism?
I’m saying this is also a valid criticism of consequentialism.
Thanks for clarifying.
I’m told that these explanations fall under the realm of meta-ethics. As far as I (not being a deontologist) can tell, all deontological ethical systems rely on assuming some basic maxim—such as “because God said so”, or “follow that rule one would wish to be universal law”.
I don’t see how deontology would work without that maxim.
For a historical example of exactly this, see the Spanish Inquisition. (They did torture people, and I did once come across some clever theological reasons for it, in which it is actually quite difficult to find the flaw).
That there is a basic maxim doesn’t mean there isn’t a) an explanation of why that maxim is of overriding importance and b) an explanation of how that maxim leads to particular actions.
Presumably meaning that it isn’t obvious how you get to (a) and (b). Philosophers are very aware that you need to get to (a) and (b) and have argued elaborately (see Kant) towards them. (Has anyone here read so much as one wiki or SEP page on the subject?)
Right. This thread is full of bizarrely strawmanish characterizations of deontology.
Quite. In order to have a good deontological basis of ethics, both (a) and (b) are necessary; and I would expect to find both. These build on and enhance the maxim on which they are based; indeed, these would seem, to me, to be the two things that change a simple maxim into a full deontological basis for ethics.
Was it by any chance “If we don’t torture these people they’ll go to hell, which is worse than torture”?
That’s a large part of it, but not all of it. I can’t quite remember the whole thing, but I can look it up in a day or two.
Well, in a purely deontologic moral system, the beliefs are like “Don’t torture people, torturing people is bad”. That is, there is a list of “bad” things, and the system is very simple: You may do thing X if and only if X is not on the list. The list is outside the system. In the same way as consequentialism does not provide you with what you should place utility in, deontology does not tell you what the list is.
So when you look at it like that, what the charismatic priest is doing is not inside the moral system, but rather outside it. That is, he is trying to get his followers to change what is on their lists. This is no different from an ice cream advertisement trying to convince a consequentialist that they should place a higher utility on eating ice cream.
To summarize, the issue you are talking about is not one meant to be handled by the belief system itself. The priest in your example is trying to hack people by changing their belief system, which is not something deontologists in particular are susceptible to beyond anyone with a different system.
Are you asserting that purely deontologic systems don’t include good things which it is preferable to do than leave undone, or that it is mandatory to do, but only bad things which it is mandatory to refrain from doing?
And are you asserting that purely deontologic systems don’t allow for (or include) any mechanism for trading off among things on the list? For example, if a moral system M has on its list of “bad” things both speaking when Ganto enters my tent and not-speaking when Ganto enters my tent, and Ganto enters my tent, then either M has nothing to say about whether speaking is better than not-speaking, or M is not a purely deontologic system?
If you’re making either or both of those assertions, I’d be interested in your grounds for them.
Is there any such “pure” system? Deontological metaethics has to put forward justifications because it is philosophy. I don’t see how you can arrive at your conclusion without performing the double whammy of both ignoring what people who call themselves deontologists say, AND dubbing the attitudes of some unreflective people who don’t call themselves deontologists “deontology”.
What if not torturing people requires you to torture a person, like in the trolley problem? What then? Do deontologists not care about torturing those people because they did not personally torture them, or do they secretly do consequentialism and dress it up in “rules” after the fact?
I see everything in terms of AI algorithms; with consequentialism, I imagine a utility-maximizing search over counterfactuals in an internal model (utility-based agent), and with deontology, I imagine a big bunch of if-statements and special cases (a reflex-based agent).
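To make the contrast concrete, here is a minimal sketch of those two agent shapes (purely illustrative; the function names and toy data are mine, not anything from the thread):

```python
# A utility-based agent: searches over counterfactual outcomes predicted by an
# internal model and picks the action with the highest-scoring outcome.
def utility_based_agent(actions, model, utility):
    return max(actions, key=lambda action: utility(model(action)))

# A reflex-based agent: a bundle of if-statements and special cases, with no
# explicit model of consequences at all.
def reflex_based_agent(situation):
    if situation == "someone is drowning":
        return "rescue them"
    if situation == "asked to lie for personal gain":
        return "refuse"
    return "do nothing"

# Toy usage: 'model' maps actions to predicted outcomes, 'utility' scores them.
model = {"donate": "one life saved", "keep money": "nothing changes"}.get
utility = {"one life saved": 10, "nothing changes": 0}.get
print(utility_based_agent(["donate", "keep money"], model, utility))  # donate
print(reflex_based_agent("someone is drowning"))  # rescue them
```

The point of the sketch is only the difference in shape: the first agent’s behaviour changes whenever the model or the utility function changes, while the second agent’s behaviour can only change by editing the rules themselves.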
No, archetypal deontologists don’t torture people even to prevent others from being tortured. Not pushing the guy off the bridge is practically the definition of a deontologist.
Exactly. I am not advocating deontology, just clarifying what it means. A true deontologist who thought that torture is bad would not torture anyone, no matter what the circumstances. Obviously, this is silly, and again, I do not advocate this as a good moral system.
No. That is not obvious. You are probably being misled by thought experiments where it is stipulated that so-and-so really does have life-saving information and really would yield it under torture. In real life, things are not so simple. You might have the wrong person. They might be capable of resisting torture. They might die under torture. They might go insane under torture. They might lie and give you disinformation that harms your cause. Your cause might be wrong... you might be the bad guy...
Real life is always much more complex and messy than maths.
No it isn’t. It’s just more complex and messy than trivial maths.
If it’s too non-trivial for your brain to handle as maths, it might as well not be maths.
If we look at it from a consequentialist perspective, any success for another meta-ethical theory will boil down to caching a complicated consequentialist calculation. It’s just that even if you have all the time in the world, it’s still hard to correctly perform that consequentialist calculation, because you’ve got all of these biases in the way. Take the classic ticking time-bomb scenario, for an example: everyone would like to believe that they’ve got the power to save the world in their hands; that the guy they have caught is the right guy; that they have caught him just in the nick of time. But nobody can produce an example where this has ever been true. So, the deontological rule turns out to be equivalent to the outside-view consequentialist argument for ticking time-bomb scenarios. Naturally, other cases of torture would require more analysis to show the equivalence (or non-equivalence; I have not yet done the research to know which way it would come out).
I don’t see how you could perform a meaningful calculation without presuming which system is actually right. Who wants an efficient programme that yields the wrong results?
That very much depends on who benefits from those wrong results.
So I can say moral system A outperforms moral system B just where A serves my selfish purposes better than B. Hmm. Isn’t that a rather amoral way of looking at morality?
No.
It’s an honest assessment of the state of the world.
I’m not agreeing with that position, I’m just saying that there are folks who would prefer an efficient program that yielded the wrong results if it benefited them, and would engage in all manner of philosophicalish circumlocutions to justify it to themselves.
That’s not very relevant to the benefits or otherwise of consequentialism and deontology.
Are there any non tricky situations? Stepping on a butterfly could have vast consequences.
Consequentialism works really well as an everyday heuristic also. Particularly when it comes to public policy.
That would require being able to predict the results of public policy decisions with a reasonable degree of accuracy.
Wouldn’t a rational consequentialist estimate the odds that the policy will have unpredictable and harmful consequences, and take this into consideration?
Regardless of how well it works, consequentialism essentially underlies public policy analysis and I’m not sure how one would do it otherwise. (I’m talking about economists calculating deadweight loss triangles and so on, not politicians arguing that “X is wrong!!!”)
The discussion was about consequentialist heuristics, not hypothetical perfectly rational agents.
I think the central trick is that you don’t aim at the ultimate good in public policy, just things like fairness, aggregated life years and so on. You can decide that spending a certain amount of money will save X lives in road safety, or Y lives in medicine, and so on, without worrying that you might be saving the life of the next Hitler.
My point still stands.
Maybe, but it doesn’t reflect back on the usefulness of c-ism as a fully fledged moral theory.
This discussion was about consequentialism as an everyday heuristic.
I don’t know about you, but I don’t do public policy on a day to day basis.
Few of us do public policy on a daily basis, but many of us have opinions on it, and of those that don’t most of us have friends that do. Not that having correct policy judgment buys you much in that context, consequentially speaking; I think the hedonic implications would probably end up being decided by how much you need to keep beliefs on large-scale policy coherent with beliefs on small-scale matters that you can actually affect nontrivially.
People that have little chance of ever actually doing public policy have virtually no incentive to have true beliefs about it and even if they did, they likely wouldn’t get enough feedback to know if their heuristics about it are accurate.
Unfortunately, the same frequently applies to the people who actually do do public policy, especially since they are frequently not affected by their own decisions.
True enough.
So when you see a moral theory, think heuristic, not dogma.
The more I think about this the more I realize I need a term that means “Even though the people who are now jumping all over me about some philosophical subject are talking about a memetic war that’s currently raging, and even though they’re not mindless zombies saying completely useless things in support of their points, they’ve chosen to hyper-focus on some piece of my conversation not essential to the main point, and have derailed the conversation.”
I want to use “philosophical landmines” for that, but now that I re-read your article, I realize I can’t. This is because the implication you created is that the person who steps on the landmine has temporarily turned into an argument-shooting NPC.
I’m not sure what should be done about this, if that means I should seek a different term (ideas?), or suggest that a new article for that variation be developed (seems redundant) or request an edit to this one to include current memetic wars and lucid combatants who, though not straw-manning you, have chosen to completely derail your conversation because you triggered them with your special ignorant wording.
Do you feel that an edit to include triggering non-zombies who pursue current wars in irrelevant places would be a good idea? If not, then what do you suggest?
Also important: As a continuation of my interest in beginning a communal list of philosophical landmines, I’ve been thinking about various philosophical landmines that I’ve accidentally laid, looking for good examples. In each case, I’ve considered that it might be a bad idea to post them because this may insult other people due to the implication of mindlessness, or because the person responding was not mindless, so it’s actually inappropriate as an example. Also, we’d probably be more constructive if we could talk openly about philosophical landmines when they arise, which would require removing the insulting insinuations. I think the concept of the philosophical landmine would be far more useful and way easier to create a collection of examples for if the insulting insinuation was taken out. So, an edit would be my preference.
Note: I am delayed in producing this communal list until I somehow resolve the insulting insinuations obstacle.
Compare to this other kind of landmine.
Seems to be the same kind of landmine in a different yard. Being on team consequentialism probably isn’t much different than being on team green in terms of the us-vs.-them type psychological mechanisms invoked.
Yes, I invented the idea of “philosophical hurdles” ages ago. For every “topic” there is some set of philosophical hurdles you have to overcome to even start approaching the right position. These hurdles can be surprisingly obscure. Perhaps you don’t have quite enough math exposure and your concepts of infinity are a bit off, or you aren’t as solidly reductionistic as others and therefore your concept of identity is marginally weakened—anything like these can dramatically alter your entire life, goals, and decisions—and it can take a significant amount of time and motivated conversation before you can debug someone’s belief system to identify exactly what hurdle they are stuck on (and fixing it is a whole other problem; usually there are hurdles to overcome before you can overcome hurdles, and then more before that...). And any small confusion in metaphysics or epistemology (let alone ethics or politics) seems to limit your soul to a certain point, beyond which you can’t proceed to the next philosophic hurdle. And these are pretty advanced problems to have anyway; you have to already be pretty intelligent, well-educated, and have some strong rationalist skills to really have a chance at overcoming the sheer burden of human bias in the first place.
Invented for yourself, but such hurdles are very old.
Oh, excellent. I’m one of today’s lucky 10,000!
I call it something like correctly identifying the distinction(s) relevant to our contextually salient principles. That’s more general and the emphasis is different, but it involves the same kind of zeroing in on the semantic content that you intend to communicate.
Sure, you can have principles like consequentialism, but why bring them up unless the principles themselves are relevant to what you’re saying at this moment? Principles float in the conversational background even when you’re not talking about them directly. Discussing the principle of the thing can be useful, but such talk is often subrelevant, so to speak, even when you’re making use of said principle. (“Subrelevant” because everything is relevant to something or other. I don’t believe in total irrelevance—even incoherent babbling is meaningful to a discussion of the subject’s state of mind, if nothing else. It’s “incoherent” in the sense that the semantic content expressed therein is less than relevant with respect to the concerns of some conversation or presumed conversation.) And sometimes it’s the other way around, when the disagreement really is about principles.
What the conversation is Really About can be established by scrutinizing the various positions and perceiving the precise nature of the contrasts between them. This can be done objectively as soon as the sides are willing to commit to their statements, by identifying the principles invoked by each, examining the manners in which they clash, what bearing they have on the matter at hand, etc.
It seems I’m carrying around what almost amounts to a skeleton of a theory on the subject. I’m not sure where it came from, though I’m afraid it may not be particularly original. I’ve even come up with terminology for things like “cached thoughts”. I refer to distinctions drawn from stale contexts that are liable to obsessively pop up in every conversation, either out of habit or cultivated as part of ideological conditioning, as “intrusive distinctions”. There is a counterpart to these—blind spots that are routinely ignored, either because no one brings them up or everyone is determined not to see them, but I haven’t named those yet. “Extrusive”? No, that doesn’t really make sense.
Sorry for rambling on. As you can see, all this theorizing is for my benefit.
My current strategy is that I just don’t talk to most people about confusing things in general. If I don’t expect that they’ll actually do anything differently as a result of our having talked, all that’s left is signaling, and I don’t have a strong need to signal about confusing things right now.
Another excellent post, Ryan! (sorry, I blame the autocorrect)
I like your list of the common landmines (actually, each one is a minefield in itself).
Right, modeling other people is a part of accurately mapping the territory (see how I cleverly used the local accessible meme, even though I hate it?), which is the first task of rationality.
In other words, if your map of the minefield sucked and you got blown up, don’t pretend that you didn’t, refusal to update only makes your map worse.
That’s a part of your first prong, no?
Lulz. This is the last time that joke will be funny.
Are you being especially ornery today?
It involves the first one, and is maybe related to it. Ontologies aren’t fundamental. Think of it in whatever way is convenient to you.
Ornery, not lighthearted? Geez, talk about miscommunication and landmines. I guess I’ll avoid commenting further, at least for today.
I didn’t know how to read that, actually. I came up with some mildly amusing guesses as to what tone that might have been and what might have brought it on, but they were all over the map*.
… * not the territory. They were all in my head.
This post is currently reading at −3, but it’s not hidden by default for me. My Preferences page indicates that things below −2 should be hidden. I’ve checked another comment (at −6), which is correctly hidden. Anyone know why this is occurring? Apologies for the meta.
A while ago Eliezer decreed that comments below −3 are to be hidden; apparently this overrides all preferences.