In short, I don’t think I buy your claim that “Some empirical statements, orthogonal to truth or falsity, are offensive.” At least, I’d like to see it supported better before I consider it.
Some examples of empirical statements with questionable-to-bad ethical undertones. I present them to you as food for thought, not as some sort of knock-down argument.
“Your husband’s corpse is currently in an advanced stage of decomposition. His personality has been completely annihilated. Remember how he sobbed on his deathbed about how afraid he was to die?” (Reminding a person of a bad thing they don’t want to think about.)
“Ladies and gentlemen of the jury, here are twenty police case files on convicted child murderers, all of them Albanian just like the defendant, without any statistical context.” (Facts presented in a tendentious manner.)
“Just thought it might be interesting for you to know that women tend to do about 10% worse on this test than men. Anyway, you may turn your papers over now—good luck!” (Self-fulfilling prophecies.)
“You’re the only Asian in our office.” “Did you notice how you’re the only Asian in our office?” “Maybe you didn’t realize you’re the only Asian in our office.” (Drawing attention to & thereby amplifying the salience of an ingroup/outgroup distinction.)
“All I’m saying is that girls who wear revealing clothing are singling themselves out for attention from predators!” (Placing blame for a moral harm on a blameless causal link leading to the harm.)
“If he dresses effeminately like that, he’s going to get bullied.” (Ditto; also, status quo bias.)
“A black man will never hold the highest office in this country.” (Self-fulfilling prophecy; failure to acknowledge shittiness of (purported) empirical situation.)
I think that the ability and right to say true things regardless of whether someone finds those truths unpleasant is extremely important, and social norms to the contrary should not be adopted or perpetuated lightly.
Not lightly, no. But as I was saying to Daniel_Burfoot above, there is just no avoiding the fact that statements, including statements of truth, are speech-acts. They will affect interlocutors’ probability distributions AND their various non-propositional states (emotions, values, mood, self-worth, goals, social comfort level, future actions, sexual confidence, prejudices). Inconvenient as human mind-design is, it’s really hard to suppress that aspect of it.
But there is a big asymmetry here—you (the speaker) know what you mean, so if it really needs to be said, take an extra second to formulate it in the way that has the least perlocutionary disutility.
Some examples of empirical statements with questionable-to-bad ethical undertones. I present them to you as food for thought, not as some sort of knock-down argument.
These are food for thought indeed. My thoughts on some of them, intended as ruminations and not refutations:
“Your husband’s corpse is currently in an advanced stage of decomposition. His personality has been completely annihilated. Remember how he sobbed on his deathbed about how afraid he was to die?” (Reminding a person of a bad thing they don’t want to think about.)
I’m not sure what I think about this one. I do note that it would probably be perceived differently by someone who was aware of its truth (this person would certainly be hurt by the reminder of the bad thing), than by someone who was not (e.g., a religious person).
“Ladies and gentlemen of the jury, here are twenty police case files on convicted child murderers, all of them Albanian just like the defendant, without any statistical context.” (Facts presented in a tendentious manner.)
Exploitation of cognitive biases in the audience. Certainly an unethical and underhanded tactic, but note that its effectiveness depends on insufficient sanity in the listeners. Granted, however, that the bar for “sufficient sanity” is relatively high in such matters.
“Just thought it might be interesting for you to know that women tend to do about 10% worse on this test than men. Anyway, you may turn your papers over now—good luck!” (Self-fulfilling prophecies.)
This one is interesting. A tangential thought: have there been studies to determine the power of stereotype threat to affect people who are aware of stereotype threat?
“You’re the only Asian in our office.” “Did you notice how you’re the only Asian in our office?” “Maybe you didn’t realize you’re the only Asian in our office.” (Drawing attention to & thereby amplifying the salience of an ingroup/outgroup distinction.)
I think I’d have to agree that harping on such a fact would be annoying, at best. I do want to note that one solution I would vehemently oppose would be to forbid such statements from being made at all.
“All I’m saying is that girls who wear revealing clothing are singling themselves out for attention from predators!” (Placing blame for a moral harm on a blameless causal link leading to the harm.)
There’s something wrong with your assessment here and I can’t quite put my finger on it. Intuitively it feels like the category of “blame” is being abused, but I have to think more about this one.
“If he dresses effeminately like that, he’s going to get bullied.” (Ditto; also, status quo bias.)
The problem here, I think, is that some people use “X is going to happen” with the additional meaning of “X should happen”, often without realizing it; in other words they have the unconscious belief that what does happen is what should happen. Such people often have substantial difficulty even understanding replies like “Yes, X will happen, but it’s not right for X to happen”; they perceive such replies as incoherent. The quoted statement can well be true, and if said by someone who is clear on the distinction between “is” and “ought”, is not, imo, offensive.
“A black man will never hold the highest office in this country.” (Self-fulfilling prophecy; failure to acknowledge shittiness of (purported) empirical situation.)
See above. Also, there’s a difference between “A black man will never hold the highest office in this country, and therefore I will not vote for Barack Obama” and “A black man will never hold the highest office in this country; this is an empirical prediction I am making, which might be right or wrong, and is separate from what I think the world should be like.”
If I think X will happen (or not happen), it’s important (imo) that I have the ability and right to make that empirical prediction, unimpeded by social norms against offense. If people who are afflicted with status quo bias, or other failures of reasoning, fail to distinguish between “is” and “ought” and in consequence take my prediction to have some sort of normative content — well, it may be flippant to say “that’s their problem”, but the situation definitely falls into the “audience is insufficiently intelligent/sane” category. Saying “this statement is offensive” in such a case is not only wrong, it’s detrimental to open discourse.
I happen to be reading Steven Pinker’s The Blank Slate right now, and he comments on that well-known failing of twentieth-century social sciences, the notion that “we must not even consider empirical claims of inequality in people’s abilities, because that will lead to discrimination”. Aside from the chilling effect this has on, you know, scientific inquiry, there’s also an ethical problem:
If you think that pointing out differences in ability will lead to discrimination, then you must think that it’s not possible to treat people with equal fairness unless they are the same along all relevant dimensions. That’s a fairly clear ethical failing. In other words, if your objection to “some people are less intelligent than other people” is “but then the less intelligent people will be discriminated against!”, you clearly think that it’s not possible to treat people fairly regardless of their intelligence… and if that’s the case, then that is the problem we should be opposing. We shouldn’t say “No no, all people are the same!” We should say, “Yes, people are different. No, that’s not an excuse to treat some people worse.”
Not lightly, no. But as I was saying to Daniel_Burfoot above, there is just no avoiding the fact that statements, including statements of truth, are speech-acts. They will affect interlocutors’ probability distributions AND their various non-propositional states (emotions, values, mood, self-worth, goals, social comfort level, future actions, sexual confidence, prejudices). Inconvenient as human mind-design is, it’s really hard to suppress that aspect of it.
Agreed. I just think that branding certain sorts of statements as “offensive” is entirely the wrong way to go about treating this issue with the care it deserves, because of the detrimental effects that approach has on free discourse.
But there is a big asymmetry here—you (the speaker) know what you mean, so if it really needs to be said, take an extra second to formulate it in the way that has the least perlocutionary disutility.
Agreed, and I think this is a special case of the illusion of transparency.
(P.S. Today I learned the word “perlocutionary”. Thank you.)
As an aside, I almost forgot a really good example of the phenomenon of “harmful facts,” which is that the suicide rate in a region goes up whenever a suicide is reported on the news. Indeed, death rates in general go up whenever a suicide is reported, because many suicides are not recognized as such (e.g., somebody steers into oncoming traffic).
For this reason, police tend to hush suicides up (at least, they did in my old hometown & I think it’s widespread).
I do note that it would probably be perceived differently by someone who was aware of its truth (this person would certainly be hurt by the reminder of the bad thing), than by someone who was not (e.g., a religious person).
Maybe, although I strongly suspect religious people alieve that their relatives are gone (otherwise, as others have noted, a funeral would be more like a going-away party).
This one is interesting. A tangential thought: have there been studies to determine the power of stereotype threat to affect people who are aware of stereotype threat?
Good question. Wikipedia turns up this link, which would seem to say “Yes.” So happily, the corrective for this contextually harmful empirical statement is a contextually helpful empirical statement.
...one solution I would vehemently oppose would be to forbid such statements from being made at all.
Oh yes, certainly. Refusing to notice ingroup/outgroup differences is just the opposite failure mode.
There’s something wrong with your assessment (of the revealing clothing --> sexual assault case) here and I can’t quite put my finger on it. Intuitively it feels like the category of “blame” is being abused, but I have to think more about this one.
I am still philosophically confused about this issue, although I have been thinking about it for a while. You are probably objecting to the fact that ex hypothesi, less revealing clothing leads to fewer sexual assaults, so why wouldn’t we follow that advice—yes? As I say, I don’t have a full account of that. All I wanted to draw attention to is the ethical questionable-ness of making such a statement without any acknowledgement that one is asking potential victims to change their (blameless) behaviour in order to avoid (blameworthy) assault from others. Compounding the issue is the suspicion that statements like this ALSO tend to be a form of whitewashed slut-shaming.
The problem here, I think, is that some people use “X is going to happen” with the additional meaning of “X should happen”, often without realizing it; in other words they have the unconscious belief that what does happen is what should happen. Such people often have substantial difficulty even understanding replies like “Yes, X will happen, but it’s not right for X to happen”; they perceive such replies as incoherent.
Yes, in my experience this is very common in muggle society.
If I think X will happen (or not happen), it’s important (imo) that I have the ability and right to make that empirical prediction, unimpeded by social norms against offense. If people who are afflicted with status quo bias, or other failures of reasoning, fail to distinguish between “is” and “ought” and in consequence take my prediction to have some sort of normative content — well, it may be flippant to say “that’s their problem”, but the situation definitely falls into the “audience is insufficiently intelligent/sane” category. Saying “this statement is offensive” in such a case is not only wrong, it’s detrimental to open discourse.
Right. The rubric that I try to use in such situations is essentially a consequentialist one. Roughly speaking, the idea is that you should try to predict how your statements might be misinterpreted by a (possibly silly) audience, and if the expected harm of the misinterpretation is significant as compared to the potential benefit of your statement, then reformulate/be silent/narrow your audience/educate your audience about why they shouldn’t misinterpret you. I sympathize, believe me! It’s incredibly annoying to be read uncharitably. But if you know how to prevent an uncharitable/harmful reading, and don’t as a matter of principle because the audience should know better… I think the LW term for that would be “living in the should-universe.”
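As a toy illustration of that rubric (my own sketch, with every number invented purely to show the shape of the trade-off, not an estimate of anything):

```python
# Hypothetical sketch of the rubric above: weigh the expected harm of likely
# misreadings against the expected benefit of saying the thing as phrased.
# All quantities are made-up placeholders.

def value_of_saying(benefit, misreadings):
    """benefit: utility if the statement is understood as intended.
    misreadings: list of (probability, harm) pairs for ways a (possibly
    silly) audience might take it."""
    p_misread = sum(p for p, _ in misreadings)
    expected_harm = sum(p * harm for p, harm in misreadings)
    return (1 - p_misread) * benefit - expected_harm

# Blunt phrasing: same content, but a sizable chance of an uncharitable reading.
blunt = value_of_saying(benefit=10, misreadings=[(0.4, 25)])
# Careful phrasing: a few extra seconds of work, far fewer misreadings.
careful = value_of_saying(benefit=10, misreadings=[(0.05, 25)])

print(blunt, careful)  # -4.0 8.25: reformulating wins in this made-up case
```

The point is only that “reformulate/be silent/narrow your audience” falls out of an ordinary expected-value comparison, nothing deeper.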
Agreed. I just think that branding certain sorts of statements as “offensive” is entirely the wrong way to go about treating this issue with the care it deserves, because of the detrimental effects that approach has on free discourse.
As it happens, I broadly agree about the term “offensive,” which is an incredibly censorious and abuse-prone word. I think we should try to give better fault assessments than that—and happily, on LW most people usually do.
I am still philosophically confused about this issue, although I have been thinking about it for a while. You are probably objecting to the fact that ex hypothesi, less revealing clothing leads to fewer sexual assaults, so why wouldn’t we follow that advice—yes? As I say, I don’t have a full account of that. All I wanted to draw attention to is the ethical questionable-ness of making such a statement without any acknowledgement that one is asking potential victims to change their (blameless) behaviour in order to avoid (blameworthy) assault from others.
Would you have similar objections if I advised you to lock your house to reduce theft?
Doesn’t that depend on the context of the advice?
If the context is that you (or others) are telling me that it wasn’t the thief’s fault that they stole my TV, or that the fact that my house was unlocked is evidence that I consented to the taking of my TV, that context may make the advice seem part and parcel of the blame-shifting.
For that matter, the reason to lock your house may well be to avoid being low-hanging fruit — IOW, someone else’s TV gets stolen, not yours; theft is not actually reduced, just shifted around. There’s no guarantee that everyone locking their house would reduce theft. The thieves learn to pick locks and everyone’s costs are higher — but now a person who doesn’t pay that cost is stigmatized as too foolish to protect themselves.
As an old boss of mine used to say, “locks are to keep your friends out.” They work against casual intruders, not committed ones.
If the context is that you (or others) are telling me that it wasn’t the thief’s fault that they stole my TV, or that the fact that my house was unlocked is evidence that I consented to the taking of my TV
That also depends. An insurance company would be well within its rights to charge you a higher premium if you refused to lock your house.
Right — but an insurance company would do that even if it didn’t reduce theft overall, but merely shifted theft away from their insured customers onto others. It could even be negative-sum thanks to the cost of locks. If we actually want to reduce theft overall, shifting it around doesn’t suffice.
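To put a toy number on the negative-sum worry (figures invented; this assumes, as the comments above do, that locks only redirect burglars rather than deterring any):

```python
# Invented-number sketch: if locks mainly redirect burglars to easier targets,
# universal locking leaves total theft unchanged and adds the cost of the
# locks, so society as a whole comes out behind.

HOUSEHOLDS = 1_000
BURGLARIES_PER_YEAR = 50   # assumed fixed: burglars just pick the easiest doors
TV_VALUE = 500
LOCK_COST = 40

def total_social_cost(fraction_locked):
    theft_losses = BURGLARIES_PER_YEAR * TV_VALUE     # unchanged by assumption
    lock_spending = fraction_locked * HOUSEHOLDS * LOCK_COST
    return theft_losses + lock_spending

print(total_social_cost(0.0))   # 25000.0
print(total_social_cost(1.0))   # 65000.0: worse in aggregate, just differently distributed
```

The individual incentive to lock (someone else’s TV gets stolen, not yours) survives even when the aggregate effect is negative, which is the arms-race flavour of the complaint above.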
The whole point is that this is a strawman.
(Not sure what the point of the rest is—clarification please?)
It’s not. Maybe you’re lucky enough to have never encountered it.
That is, no-one here is arguing for that position. I am well aware that there are people out there who hold all sorts of unjustifiable beliefs, but conflating them with my reasonable claims is logically rude.
I do note that it would probably be perceived differently by someone who was aware of its truth (this person would certainly be hurt by the reminder of the bad thing), than by someone who was not (e.g., a religious person).
Maybe, although I strongly suspect religious people alieve that their relatives are gone (otherwise, as others have noted, a funeral would be more like a going-away party).
One counter-example: In Julia Sweeney’s Letting Go of God (an account of how Bible study eventually led a Catholic to become an atheist), she says that accepting that there is no afterlife led to her having to mourn all her relatives again.
Perhaps there is something between verbal belief and gut-level alief.
Perhaps there is something between verbal belief and gut-level alief.
Alternative hypothesis: some religious people are mourning the fact that they will never be able to interact with the person again, not the fact that the person’s mind has been irrevocably destroyed.
“All I’m saying is that girls who wear revealing clothing are singling themselves out for attention from predators!” (Placing blame for a moral harm on a blameless causal link leading to the harm.)
What moral theory are you using in the parenthetical comment? For example, according to naive utilitarianism it makes no sense to divide causal links leading to harm into “blameless” and “blameworthy”.
Right, because naive utilitarianism sees ‘blame’ as more or less a category error, since utilitarianism is fundamentally just an action criterion. My own moral system is a bit of a hodgepodge, which I have sometimes called Ethical Pluralism.
As I say to Said below, I don’t have a full theory of blame and causality, although I think about it most every day. But I do think that there is something wrong/incomplete/unbalanced about blaming somebody for being part of a causal chain leading to a bad outcome, even if they are knowingly a part of that chain. For example, Doctor Evil credibly commits to light a school on fire if you don’t give him $10 million. I would consider refusal to pay up in this situation as non-blameworthy, even though it causally leads to a bunch of dead schoolchildren.
You may want to look at various decision theories, particularly updateless decision theory (UDT) and its variants.
For example, Doctor Evil credibly commits to light a school on fire if you don’t give him $10 million. I would consider refusal to pay up in this situation as non-blameworthy, even though it causally leads to a bunch of dead schoolchildren.
The difference between the Dr. Evil example and the revealing clothing example is that if everyone precommits to not negotiating with hostage takers, Dr. Evil wouldn’t even bother with his threat; whereas a precommitment to ignore the presence of sexual predators when deciding what to wear won’t discourage them. The clothing example is in fact similar to the locked house example I mentioned here.
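A minimal sketch of that asymmetry (mine, not the commenter’s; Dr. Evil is modelled, purely for illustration, as threatening only when he expects the threat to pay off, and the numbers are arbitrary):

```python
# Hypothetical model: a threatener who gains nothing from an ignored threat
# doesn't bother issuing it, so a credible policy of never paying removes the
# incentive to threaten in the first place.

RANSOM = 10_000_000
HARM = 10**9          # disvalue of the school burning

def threat_is_made(victim_pays_when_threatened):
    # Dr. Evil only threatens if the threat is expected to be profitable.
    return victim_pays_when_threatened

def outcome_for_victim(pays_policy):
    if not threat_is_made(pays_policy):
        return 0                      # precommitted non-payer is never threatened
    return -RANSOM if pays_policy else -HARM

print(outcome_for_victim(True))    # -10000000: a known payer invites the threat
print(outcome_for_victim(False))   # 0: the threat never gets made
```

The clothing case, on the comment’s account, lacks this structure: the predator’s behaviour isn’t conditioned on the potential victim’s policy in a way a precommitment could switch off, which is why it patterns with the locked-house example instead.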
Yes. I think that all deontological or virtue-ethics rules that actually make sense are approximations to rule consequentialism for cases where it’d be too computationally expensive to compute from scratch, and/or fudge factors to compensate for systematic errors introduced by our corrupted hardware.
The game-theory issues I mentioned (e.g., UDT, the other big one being Schelling points) are not quite the same thing as having bad approximations, since it’s impossible to have a good approximation of another agent of comparable power, even in principle.
I didn’t mean the approximations are bad. I meant that the ‘fundamental’ morality is rule (i.e. UDT) consequentialism, and the only reason we have to use other stuff is that we don’t have unlimited computational power, much like we use aerodynamics to study airplanes because it’s unfeasible to use quantum field theory for that.
My point is that once you add UDT to consequentialism it becomes very similar to deontology. For example, Kant’s Categorical Imperative can be thought of as a special case of UDT.
My point is that once you add UDT to consequentialism it becomes very similar to deontology.
UDT doesn’t need to be added to consequentialism, or the reverse. UDT is already based on consequentialist assumptions and any reasonably advanced way of thinking about consequences will result in a decision theory along those lines.
It is only people’s muddled intuitions about UDT and similar reflexive decision theories that make it seem to them that they are remotely deontological. Particularly those inclined to use UDT as an “excuse” to cooperate when they just want that to be the right thing to do for other reasons.
For example, Kant’s Categorical Imperative can be thought of as a special case of UDT.
It is only people’s muddled intuitions about UDT and similar reflexive decision theories that make it seem to them that they are remotely deontological.
From what I infer, people who think deontologically already seem to reason “The most effective decision to make as evaluated by UDT is Cooperate in this situation in which CDT picks Defect. This feels all moral to me. UDT must be on my side. I claim UDT is deontological because we agree regarding this particular issue.” This leads to people saying “Using UDT/TDT reasoning...” in places where UDT doesn’t reason in any such way.
UDT is “deontological” if and only if that deontological system consists of, or is equivalent to, the rule “It is an ethical duty to behave like a consequentialist implementing UDT”. I.e., it just isn’t.
Rather, what distinction are you drawing between UDT/TDT-like decision theories and Kant’s CI?
I count rule consequentialism as a flavour of consequentialism, not as a flavour of deontology.
I agree, but I’d argue that UDT is more than standard rule consequentialism.
I’d put it as TDT, UDT, etc. being attempts to formalize rule consequentialism rigorously enough for an AI.
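For concreteness, here is a minimal toy sketch (mine, and very much not an implementation of UDT proper) of the “Cooperate where CDT picks Defect” situation referred to above, for a Prisoner’s Dilemma against an exact copy of oneself:

```python
# Toy twin Prisoner's Dilemma with the usual illustrative payoffs.
PAYOFF = {           # (my move, twin's move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cdt_choice():
    # Causal reasoning: the twin's move is a fixed fact, and D dominates C
    # against either fixed move, so defect.
    d_dominates = all(PAYOFF[("D", o)] >= PAYOFF[("C", o)] for o in "CD")
    return "D" if d_dominates else "C"

def reflexive_choice():
    # Updateless-flavoured reasoning against an exact copy: both players end up
    # outputting whatever this function returns, so compare the two symmetric
    # outcomes directly.
    return max("CD", key=lambda move: PAYOFF[(move, move)])

print(cdt_choice(), reflexive_choice())  # D C
```

Nothing in the reflexive version appeals to a duty; it is still just picking the output with the better consequences, which is the sense in which it stays consequentialist rather than deontological.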
“Your husband’s corpse is currently in an advanced stage of decomposition. His personality has been completely annihilated. Remember how he sobbed on his deathbed about how afraid he was to die?” (Reminding a person of a bad thing they don’t want to think about.)
I got away with a mild version of that one—a friend’s mother had just died, and I said “This is a world where people die”, and it went over well. However, my friend had been doing meditation seriously for a while.
“Just thought it might be interesting for you to know that women tend to do about 10% worse on this test than men. Anyway, you may turn your papers over now—good luck!” (Self-fulfilling prophecies.)
I actually got hit with a version of that—right before I started college there was an assembly where they handed out papers with correlations between SATs, high school average, and success in college. I had a bad combination, with my SATs much better than my GPA. I can remember thinking “Then I might as well give up.”
That wasn’t a sensible thought, but it wasn’t sensible for them to give out those papers without saying something like “and here’s counselling” or “high SAT/low GPA means you need to develop better work habits” or some such.
“If he dresses effeminately like that, he’s going to get bullied.”
Aside from the issues you’ve raised, it also implies that there’s nothing to be done, not even martial arts school.