I find being generally known to be unwilling to lie highly useful in many situations. Less than a week ago I spontaneously volunteered a compliment to someone who politely thanked me, only to then double-take and remark that she thought that I wouldn’t have said it if I hadn’t meant it. Consequentialists who think that consequentialists should be able to solve the precommitment problem and be effectively honest nonetheless, in real life, cite my deontological prohibition on lying as a good reason to trust me. I am fairly good at omission, and have successfully avoided outing closeted people of my acquaintance who make that preference known to me, though I never felt the need to go through a similar period myself.
Arbitrary people are not obligated to trust me to handle the truth correctly. If for some reason I’m giving the impression that I’m the equivalent of a Nazi at the door or a homophobic parent, I see no reason from their perspective that they should confess to me these secrets even if I ask. This does not mean that we will be friends if I learn that this has been happening. There are plenty of things people might choose to do for reasonable or even unavoidable reasons that mean we will not be friends.
This post makes me less interested in inviting you over for dinner again. What has to happen in your head for you to be willing to come to my house and eat food I cook and participate in charming conversation and then blithely slash our tires if we ask the wrong question because you think we’re going to become hysterical or behave immorally should we gain access to information or be told that we cannot have it? Why does that sound like a welcoming environment you’d like to visit, with us on such a supposed hair trigger about mere true facts? Why would saying this make you sound like a guest I’d prefer? Whatever it is, I don’t like or want it closer to me. You may make that tradeoff, but imploring the people around you to “accept” others’ “right” to lie to them seems like a kind of fucked-up way to attempt to cheat the tradeoff.
What has to happen in your head for you to be willing to come to my house and eat food I cook and participate in charming conversation and then blithely slash our tires if we ask the wrong question because you think we’re going to become hysterical or behave immorally should we gain access to information or be told that we cannot have it? Why does that sound like a welcoming environment you’d like to visit, with us on such a supposed hair trigger about mere true facts?
There are some communities I consider incredibly welcoming where I don’t imagine by any means that anything I say will be received well just because it’s true. On the other hand, a subculture that not only has idiosyncratic social norms but aggressively shuns anyone who follows mainstream norms, likening violations of their idiosyncratic norms to slashing people’s tires… that sounds incredibly unwelcoming to me.
“Hair trigger about mere true facts” is hyperbole. But the truth is that the overwhelming majority of the human race consists of people who sometimes respond badly to being told “mere true facts.” Insisting you are an exception is quite a brag. It’s possible, but the prior is low. I’d give members of the LessWrong community better odds of being such an exception than I’d grant to most people, but I don’t think every member of the community, or even every prominent member of the community, qualifies. In some cases I think I’ve seen strong evidence to the contrary. (For reasons that should be obvious, please do not ask me to name names.) Because of this, I’m not going to default to treating most members of the LessWrong community radically differently than how I treat non-LessWrongers.
This post makes me less interested in inviting you over for dinner again. What has to happen in your head for you to be willing to come to my house and eat food I cook and participate in charming conversation and then blithely slash our tires if we ask the wrong question because you think we’re going to become hysterical or behave immorally should we gain access to information or be told that we cannot have it? Why does that sound like a welcoming environment you’d like to visit, with us on such a supposed hair trigger about mere true facts?
Not really my business, but a reaction like this may give people an incentive to lie to you.
Not really my business, but a reaction like this may give people an incentive to lie to you.
I think that reaction is walking her talk. She could have changed her preference for inviting him over for dinner silently. Being truthful about her position is an example of being radically honest.
It depends on her reputation for being good at detecting when people lie to her.
If she has a reputation for being good at it and openly makes it known that she punishes people for lying to her, people will be less likely to lie to her. She only has a problem if people believe that she can’t effectively punish people for lying to her because she doesn’t spot the lies.
It doesn’t make sense to adopt a policy where a person sharing information about what it is like to interact with them must never affect how likely you are to interact with them. If someone tells me they’ve taken up smoking, they have contracted tuberculosis, they have decided that punching people in the arm is affectionate behavior, etc., then it’s kind of them to warn me and they could achieve short-term gains by deceiving me instead until I inevitably notice, but I will not reward the kindness of the warning with my company. The case of lying recurses here where the other examples don’t, but my goal is not, “make sure that people who have a tendency to lie don’t lie about having that tendency”. It’s “don’t hang out with people who are going to lie to me, like, at all”.
This post makes me less interested in inviting you over for dinner again. What has to happen in your head for you to be willing to come to my house and eat food I cook and participate in charming conversation and then blithely slash our tires if we ask the wrong question because you think we’re going to become hysterical or behave immorally should we gain access to information or be told that we cannot have it?
I think it’s a mistake to interpret “I will sometimes do (extreme thing)” as “my threshold for doing (extreme thing) is low enough that I’d be likely to do it in everyday situations”.
If I visited your house, ate your food, and then you asked me “I want to kill my son by running him over with my car because he told me he’s gay. What’s the best way to do this without being caught by the police?”, depending on circumstances, I might slash your tires, or do things that cause as much damage to you as slashing your tires.
So if you asked me if I would slash your tires if you told me something bad, I’d have to say “yes”. But it doesn’t mean that if you invited me to your house you would have to watch what you say to me in fear that I might slash your tires, because the kinds of things that would lead me to do that would also imply that you’re seriously messed up. Nobody would just say those things by accident.
I see this fallacy a lot in discussions of rationalist ideas.
to come to my house and eat food I cook and participate in charming conversation and then blithely slash our tires if we ask the wrong question
It seems like this is an example of my new favorite conversational failure mode: trying to map an abstraction onto the reference class of your personal experience, getting a strange result, and getting upset instead of curious.
ChrisHallquist said there are some circumstances in which he feels compelled to lie. It seems like Alicorn assumed both that this must include some circumstances she’d be likely to subject him to, and that what he thinks of as a lie in that circumstance is something that will fall into the category she objects to. Of course, either of those things or both could be true—but the way to find out is to consider concrete examples (whether real or fictional).
Personally I used to make this mistake a lot when women complained (in vague abstract terms) about being approached by strangers in coffeeshops, and talk about how they’re not obligated to be polite or nice in those cases. Once I got curious and asked questions, and found out that “approached” meant a guy persistently tried to engage her in conversation with no affirmative encouragement from her, and “not polite” didn’t mean “fuck off and die, asshole” but just failing to throw a lot of warmth and smiling into the conversation, it made perfect sense, though I was surprised that it wasn’t already obvious to everyone that no such obligation exists.
I really really like this comment. I really want more clarification now. But from my perspective, someone who has a categorical rule against lying is like learning I’m being graded on everything I say. I suddenly have the massive cognitive burden of making sure everything I say is true and that I mean all the implications or I can suddenly be shunned and outcast.
Walking is not a cognitive burden. Walking on a tightrope is. Being able to say whatever I feel like saying without having to analyze it constantly for punishment is the equivalent of simple walking. I may tell the truth in 90-99 percent of the statements I make, but when I get put into a context of punishment, suddenly I have to worry about the consequences of making what would otherwise be a very small step away from the straight and narrow.
Well, I feel like I’m walking on a tightrope much less when I’m allowed to be honest about everything than when I feel like there are things I’d be supposed to lie about.
Yes, of course. Someone asks how I’m doing. I’m having a terrible day but say fine because I don’t want to talk about it. Is this example clear enough for you?
That statement only makes the web of lies/things that technically don’t count as lies I have to keep in my head to stay on Alicorn’s good side even more complicated.
I’m not that complicated and I’d rather you didn’t pin the entire intricacy of socialization on me personally. I’m okay with phatics like “fine”, but if you’re actually talking to me, specifically, I’ll also take “enh” or other non-information as a sign not to pursue the conversation as long as I’m reasonably on the ball and you can also tell me “I’d rather not talk about that”.
You’re aware I did not invent the tire slashing metaphor, right? You seem to be reacting very strongly and specifically to it. I linked a source the first time I used it here.
It seems more like the opposite to me. Telling the truth involves keeping track of what is going on in my head, but lying involves keeping track of what is going on in my head and keeping track of what appears to be going on in my head (and making sure they aren’t identical).
This has been my experience as well. Telling the truth requires just saying what’s on your mind, sometimes adjusting to avoid making people mad or to be better understood. Lying requires a lot of effort and is stressful.
This is often true, but often the opposite is true. If telling the truth requires extensive evaluation of actual facts, but lying just requires figuring out the best thing to say, then lying can be less stressful.
As used here, “lying” means “intentional deception”, so if you say something, believing it to be true, but it’s actually false, it’s not lying. The contrast is not saying what’s true vs saying what’s false, but saying what you believe to be true vs saying what you believe to be false.
But from my perspective, someone who has a categorical rule against lying is like learning
I think there’s something missing there.
If someone were to put me in imminent fear for my life, I would feel justified in killing them. Now that you know that, would you be able to spend time with me without a massive cognitive burden of making sure that you don’t put me in imminent fear for my life?
And it’s not even like Chris is saying he’d kill anyone. He didn’t say “shunned and outcast”. He’d just lie to them. You consider being lied to such a horrifying prospect that you would devote massive cognitive resources to making sure it didn’t happen?
To be fair, the sentence he’s quoting is ungrammatical or at least weirdly phrased (“person is like learning”, I had to read that twice), and that may make it more confusing.
Fairness has nothing to do with whether someone is able to accurately read what someone else means.
When faced with weirdly phrased writing, in most cases the effective thing is to simply ignore the point, or to be open about the fact that you don’t understand what someone means and, if you care about understanding it, ask for clarification.
And confusion sometimes takes the unfortunate shape of someone thinking they understood and not realizing that they didn’t—they can’t ask for clarification then, can they? Since I believe that, purely as a matter of cause and effect, avoiding poorly formed sentences leads to this happening less often (even in cases where, after the fact, we would blame the reader more than the writer), I offered that remark as possibly helpful, that’s all.
Since I believe that, purely as a matter of cause and effect, avoiding poorly formed sentences leads to this happening less often
Do you really believe that someone doesn’t already know that avoiding poorly formed sentences improves understanding of messages? If you don’t then why do you consider it worth saying?
Do you really believe that someone doesn’t already know that avoiding poorly formed sentences improves understanding of messages?
Not really, but then again I’m not sure why you started arguing with me after I gave drethelin feedback on his poorly formed sentence, which he might have not been aware of. So I endeavored to explain to you as clearly as I could why I did that. What are you trying to do here exactly?
You made a point about fairness, and I argued that you are wrong to speak about fairness.
This happens in the context of a post by ThisSpaceAvailable. ThisSpaceAvailable recently wrote a post largely complaining that he isn’t treated fairly. In that context it’s worth noting that local community standards are not about treating other people fairly but about promoting conversations that have utility.
Fairness is a very real concept in which some people believe. The fact that you use the word when you don’t want to talk about fairness is a mistake on your part worth pointing out.
You know, unfortunately I’m so much worse at not lying to myself than at not lying to others. (Then again, I’ve found a way to put this to a good use: if promising myself I won’t eat junk food from the vending machines doesn’t work, I promise that to my girlfriend instead. See also Beeminder. Yvain’s “fictional deities” approach also sounds interesting.)
Has his post offended you or something? You employ pretty strong language, and “this post makes me less interested in inviting you over for dinner again” is a kinda public way of breaking off a friendship, which (regardless of cause) is somewhat socially humiliating for the person on the receiving end. Is that really necessary? Couldn’t such personal details be settled via PM?
I don’t see it as a sort of grey fallacy argument to note that “lying” isn’t much of a binary property (i.e., either you lie, or you don’t). There may be simple enough definitions on the surface level, but when considering our various facets of personality, playing different roles to different people in different social settings, context-sensitivity and so on and so forth, insisting on anything remotely like being able to clearly (or at all) and reliably distinguish between “omitting a truth” and “explicitly lying” versus “telling the truth” loses its tenability. There are just too many confounders; nuances of framing, word choice, blurred lines between honesty and courtesy, the list goes on.
Yes, there are cases in which you can clearly think to yourself that “saying this or that would be a lie”, but I see those as fringe cases. Consider your in-laws asking you whether the soup is too salty. Or advertising. Or your boss asking you how you like your new office. Or telling a child about some natural phenomenon. The whole concept of Wittgenstein’s ladder (“lies to children”) would be simplistically denounced as “lying” in an absolute framework.
“Hair trigger about mere true facts” disregards all these shades of “lies” (disparities between internal beliefs and stated beliefs); there are few statements outside of stating mathematical facts for which a total, congruent correspondence between “what I actually believe” and “what I state to believe” can be asserted, simply because it’s actually extremely hard to express a belief accurately.
Consider you were asked in a public setting whether you’ve ever fantasized about killing someone. Asked in an insistent manner. Dodge this!
Consider you were asked in a public setting whether you’ve ever fantasized about killing someone. Asked in an insistent manner. Dodge this!
Why is this a problem? I’m not Alicorn but I wouldn’t have any issues admitting in public that yes, I’ve fantasized about killing someone. And the situation is very easy to steer towards absurd/ridiculous if the asker starts to demand grisly details :-)
Well, “asked in an insistent manner” does seem to count as evidence that there’s some ulterior reasoning behind the question. Ordinarily I expect a lot of people (though maybe not most people) would be happy to admit that they’ve e.g. fantasized about running over Justin Bieber or whoever their least favorite pop star is with a tank, but I for one would be a lot more inclined to dodge the question or lie outright if my conversational partner seemed a little too interested in the answer.
I for one would be a lot more inclined to dodge the question or lie outright if my conversational partner seemed a little too interested in the answer.
If the conversational partner seems too interested, I’m likely to start inquiring about his/her fantasies… :-D
Heh. Dunno. Many of these other people (vaguely waves towards society) like to insist they wouldn’t. Not even while they’re in the bathroom, you know, producing rainbows. Makes it a good example.
Consider you were asked in a public setting whether you’ve ever fantasized about killing someone. Asked in an insistent manner. Dodge this!
The easiest way is to go meta. Ask the other person why they asked the question. If a person asks a question that’s inappropriate to ask in public, you can put the burden of coming up with a good answer on them.
It’s generally high-status behavior not to directly answer a question about whether you engaged in bad activity X, but instead to punish the person who asks for implying that you might be someone who engages in bad activity X, by making them justify their bad faith toward you.
It upset me. I don’t like to see lying defended. I would react about the same way to an equally cogent “Defense of Pickpocketing” or “Defense of Throwing Paint On People”, though I imagine those would be much more difficult to construct.
I think there should be negative social consequences to announcing one’s willingness to lie and that there should be significant backlash to issuing a public request that people put up with it.
I think you’re exaggerating the difficulty both of identifying lies and of omitting/deflecting.
Consider you were asked in a public setting whether you’ve ever fantasized about killing someone. Asked in an insistent manner. Dodge this!
“I think about killing my characters off pretty regularly, though often I come up with more creative things to do instead. As far as I know I’m an average amount of susceptible to intrusive thoughts, if that’s what you’re asking, but why are you asking?”
Or if I don’t even trust them with that answer I can just stare at them in silence.
Thanks for telling the truth. But downvoted for “I dislike this position, don’t want to hear it defended, and will punish those who defend it.” This is a much stronger rationalist anathema than white lies to me.
This is a forum for discussing ideas, it’s not a forum for playing social games. (I’m saying this as someone who is extremely reluctant about white lies and who hates the idea that they are socially expected to lie. Asking a question when one doesn’t want an honest answer is just silly.)
Except when acting offended and/or hurt signals solidarity and prompts your allies to attack the alien who got the shibboleth wrong. (You can argue that that’s evil, of course, but then you’re trying to break away from some very, very deeply ingrained instincts for coalition politics.)
I think that’s covered by “alternatively, evil”. ;) More seriously, though: how is “knowing what the preferred answer is and either agreeing with it or being willing to lie” a reasonable criterion by which to filter your group?
how is “knowing what the preferred answer is and either agreeing with it or being willing to lie” a reasonable criterion by which to filter your group?
It proves that you value loyalty to your group more than you value your own capacity to reason, which means that authoritarian leaders don’t have to consider you a threat (and thus destroy you and everything you hold dear) if they order you to do something against your self-interest. Thus, perversely, when you’re in an environment where power has already concentrated, it can be in your self-interest to signal that you’re willing to disregard your self-interest, even to the point of disregarding your capacity to determine your self-interest.
Once ingrained, this pattern can continue even if those authoritarian leaders lose their capacity to destroy you—and perversely, the pattern itself can remain as the sole threat capable of destroying you if you dissent.
(Put a few layers of genteel classism over the authoritarian leadership, and it doesn’t even have to look autocratic in the first place.)
Definitely covered by “alternatively, evil”. Especially when considering a two-person relationship!
My problem with calling these behaviors “evil” is that they don’t have to be consciously decided upon—they’re just ways that happened to keep our ancestors alive in brutal political environments. Cognitive biases and natural political tendencies may be tragic, but calling them “evil” implies a level of culpability that I think isn’t really warranted.
The choice of words was a bit tongue-in-cheek, but enforcing your power over others in this way is definitely not a nice thing to do. And holding people responsible for such disingenuous behaviour only when they consciously deliberate and decide on it doesn’t seem to be very useful to me. People rarely consciously deliberate and decide upon being assholes. (And if someone does what you described in a two-person relationship, I am very inclined to call them an asshole, at least in my head.)
I wonder if people who have a disadvantaged native social circuitry are more likely to judge other people because their success in social situations requires more conscious deliberation and thus they’re expecting more of it from others.
I don’t know; I’m something of a counterexample to that, and I tend to not associate with other socially disadvantaged people, so I don’t have a good reference class to build examples from.
Who gets to decide what’s a social game? Attacking people when they’re perceived to be playing social games seems like a social game to me. It’s the nature of many social games that they employ plausible deniability, which leads to a lot of false positives and hostility if you attack all of the potential threats.
I think it’s worth distinguishing between punishing discourse in general and personal social consequences. Chris, the OP, has literally been physically in my house before and now I have learned that he endorses a personal social habit that I find repellent. I’m not trying to drive him out of Less Wrong because I don’t like his ideas—I didn’t even downvote the OP! - but it seems weird that you feel entitled to pass judgment on the criteria I have for who is welcome to be in my house.
it seems weird that you feel entitled to pass judgment on the criteria I have for who is welcome to be in my house.
Edit: separated these two quotes. LessWrong comment formatting stuck them together.
I think there should be negative social consequences to announcing one’s willingness to lie and that there should be significant backlash to issuing a public request that people put up with it.
I don’t care whether you let him in your house. You’ve publicly shamed him, and you are saying that this kind of status-attack is the just response to a particular argument, regardless of how it’s presented. You also seem to be vilifying me and dodging my complaint by portraying my judgement as against your home-invitation policy, rather than against your public-backlash policy, which I resent as well.
“Vilifying you”? Because I didn’t understand the thrust of your criticism because you didn’t understand the point of my post? I’m tapping out, this is excessive escalation.
“I think about killing my characters off pretty regularly, though often I come up with more creative things to do instead. As far as I know I’m an average amount of susceptible to intrusive thoughts, if that’s what you’re asking, but why are you asking?”
(In the role of a hypothetical interlocutor)
“See this here?” (Pulls out his Asperger’s Club Card) “I have trouble distinguishing what’s socially acceptable to ask from what isn’t, and since you’re such a welcoming host, I hope you also welcome my honest curiosity. I wouldn’t want to lie—or suppress the truth—about which topic interests me right this moment.
As for the reason for my interest, you see, I’m checking whether your deontological barrier against lying can withstand the social inconvenience of (ironically) telling the truth about a phenomenon (fantasizing about killing someone) which is wildly common, but just as wildly lied about.
Your question answered, allow me to make sure I understood you correctly: My question was referring to actual people. Have I inferred correctly that you did in fact fantasize about killing living people (non-fictional) on multiple occasions?”
ETA:
Or if I don’t even trust them with that answer I can just stare at them in silence.
I see. Unfortunately, unlike “pleading the fifth”, not answering when one answer is compromising is kinda giving the answer away. The symmetrical answering policy you’d have to employ in which you stare in silence regardless of whether the answer would be “yes” or “no” is somewhat hard to sell (especially knowing that silence in such a case is typically interpreted as an answer*). Unless you like to stare in silence, like, a lot. And are known to do so.
* “Do you love me?”—silence, also cf. Paul Watzlawick’s “You cannot not communicate.”
You or your character or both have confused “not lying” with “answering all questions put to one”. And for that matter “inviting people who ask rude questions indiscriminately to parties in the first place”.
I’d hoped I addressed this in the edit, “cannot not communicate” and such.
You may find yourself in situations (not at your parties, of course) in which you can’t sidestep a question, or in which attempts to sidestep a question (ETA: or doing the silent stare) will correctly be assumed to answer the original question by the astute observer (“Do you believe our relationship has a future?”—“Oh look, the weather!”).
Given your apparently strong taboo against lying, I was wondering how you’d deal with such a situation (other than fighting the hypothetical by saying “I won’t be in such a situation”).
Questions I really can’t sidestep are usually ones from people who, for reasons, I have chosen to allow to become deeply entangled in my life. If one of my boyfriends or my fiancé decides to ask me if our relationship has a future I will tell him in considerable and thoughtful detail where I’m at on that topic, and because I choose to date reasonable human beings, this will not be an intolerable disaster. Occasionally if I’m really wedged (at a family holiday gathering, parent asks me something intrusive, won’t back off if I say it’s none of their business) I can solve the problem by deliberately picking a fight, which is usually sufficient distraction until I am not in their physical presence and can react by selectively ignoring lines in emails, but I don’t like doing that.
I don’t stare at people in silence a lot, but I do often give the visual appearance of wandering attention, and often fail to do audio processing such that I do not understand what people have said. Simply not completing the steps of refocusing my overt attention and asking people to repeat themselves can often serve the purpose when it’s not someone I have chosen to allow to become deeply entangled in my life; if we’re the only people in the room it works less well, but if I know a person well I’ll only be in a room alone with them if I trust them yea far, and if I don’t know them well and they start asking me weird questions I will stare at them incredulously even if the answer is in fact completely innocuous (“Have you ever committed grand theft auto?”; “are you a reptilian humanoid?”).
It upset me. I don’t like to see lying defended. I would react about the same way to an equally cogent “Defense of Pickpocketing” or “Defense of Throwing Paint On People”, though I imagine those would be much more difficult to construct.
I knew you were a deontologist (I am a consequentialist), but I had sort of assumed implicitly that our moralities would line up pretty well in non-extreme situations. I realized after reading this how thoroughly alien your morality is to me. You would respond with outrage and hurt if you discovered that someone had written a defense of throwing paint on people? Or pickpocketing? Although I have never practiced either of those activities and do not plan to ever do so, my reaction is totally different.
Pickpocketing is a perfectly practical technique which, like lockpicking, might be used for unsavory purposes by shortsighted or malicious people, but is probably worth knowing how to do and makes a great party trick. And throwing paint on people? Hilarious. It’s not a terribly nice thing to do, especially if the person is wearing nice clothes or is emotionally fragile, but I think most people who can compose a cogent philosophical essay can also target their prankstering semi-competently.
Pickpocketing-as-theft is to lying-in-general as pickpocketing-as-consensual-performance-art is to, say, storytelling, I suppose I should clarify. I think we legitimately disagree about throwing paint on people unless you are being facetious.
In terms of pickpocketing, I agree that we seem to pretty much agree; I think that pickpocketing for the purposes of stealing what doesn’t belong to you is rarely justified. I was not being facetious about the paint part, though.
A more realistic example would be something like “In Defense of Taxation to Fund the Welfare State”—which would be different from “In Defense of Lying”, because even if I think that taxation to fund the welfare state is immoral, I don’t think that someone who holds the opposite position is likely to hold me at gunpoint and demand that I give money to a beggar, but if someone thinks lying is okay to the degree that the OP does, there is a real risk of them lying to me in my personal life. More generally, advocating something bad in the abstract isn’t as bad as advocating something bad that I’m likely to experience personally.
even if I think that taxation to fund the welfare state is immoral, I don’t think that someone who holds the opposite position is likely to hold me at gunpoint and demand that I give money to a beggar
You should try not paying your taxes on the grounds that you don’t want to support the welfare state. If you persist, I’m quite sure at some point men with guns will show up at your doorstep.
Other than that he probably votes for people who pass laws telling you how much of your money will be taken “for the beggars” and who have no problems sending men with guns to enforce their commands.
He only has one vote out of the many necessary to send men with guns after me. Even if he changed his mind and voted against the welfare state, the probability that anything would change is minuscule. The expected harm from him voting for the welfare state is smaller than that of him sitting next to me after not showering for a couple of days.
But if the pool of voters were much smaller, I’d take a more negative view of his actions.
You lost me there so hard that I am wondering if we’re talking about the same thing—throwing paint at people doesn’t seem to happen in my corner of the world and I’ve never known anyone who got paint thrown at them, so maybe I’m misunderstanding something. So, to be sure, are we talking here about throwing paint, as in the stuff you paint walls with, at people, ruining their clothes, pissing them off, interrupting their day to get washed and changed and all? Is that what you find funny and defensible?
The issue is not so much whether the practice itself is usually done in a defensible manner, but that writing an article playing devil’s advocate for throwing paint at people isn’t an immoral act.
Then I happen to be asking a separate question that isn’t about “the issue”. The paragraph I am responding to is talking about the practice of throwing paint, not about the practice of writing articles about it.
Nobody here defends the practice of throwing paint.
But if you wanted me to, then I would say that it’s preferable to throwing stones at other people. You still make your political point by throwing paint at policemen, but you cause less lasting damage. Convincing those people on the left who have a habit of throwing stones at policemen in political demonstrations to instead throw paint would cause less lasting injuries.
You would get even higher returns in utility if you could convince a group like Hamas to throw paint instead of using nail bombs.
Nobody here defends the practice of throwing paint.
No?
And throwing paint on people? Hilarious. It’s not a terribly nice thing to do, especially if the person is wearing nice clothes or is emotionally fragile, but [...]
Sounds to me like that means “throwing paint is extremely funny and pretty much OK”.
Sounds to me like that means “throwing paint is extremely funny and pretty much OK”.
The point of the paragraph is to show that it’s possible to play devil’s advocate in this case. It’s also a bit about having fun playing devil’s advocate, and joking. Not long ago a fellow member on LW joked about committing bioterrorism. Distinguishing the intent with which something is written is important.
Saying “It’s not a terribly nice thing” labels the action as a hostile action. That means you only do it if you actually want to engage in a hostile action against someone else. Given various choices of hostile actions it’s not clear that throwing paint is a bad choice.
That Vulture’s paragraph could be read that way has occurred to me, but it is far from obvious (you’ll note that my original post here is a request for confirmation that I am reading things correctly). I’ve met people with opinions like that before—not on throwing paint, because again, it’s something I’m unfamiliar with, but on other ways to be a jackass.
But it doesn’t matter. Even if you were correct about that, then if we’re discussing the possibility of Alicorn’s or anyone’s outraged/upset reaction to a defense of throwing paint, this only makes sense if it’s a defense that can be taken seriously, one that elicits a serious reaction. And not something as silly as “you should prefer it to throwing nail bombs”, which deserves only a shrug. So, either way, I felt compelled to assume Vulture was saying something I’m supposed to be able to follow without suspending all common sense.
Even if you were correct about that, then if we’re discussing the possibility of Alicorn’s or anyone’s outraged/upset reaction to a defense of throwing paint, this only makes sense if it’s a defense that can be taken seriously, one that elicits a serious reaction.
I do think that Alicorn follows a policy of being offended when people engage in serious efforts to play devil’s advocate for positions that she considers to be immoral.
Playing devil’s advocate for extreme immoral positions is something that some people can see as a game. If you go to the world debating championships, you might get a topic to argue that there should be more genocide.
For debating folks, making such an argument is a fun game of being intellectually detached from the position that one argues. There are other people who don’t think there’s any use in someone producing the best defense of genocide that’s possible to produce.
So, either way, I felt compelled to assume Vulture was saying something I’m supposed to be able to follow without suspending all common sense.
It’s possible to win debating tournaments where judges look at whether the participants make rational arguments while advocating positions that are very immoral. It doesn’t take suspending common sense to make an argument that not enough people throw paint at other people. It just takes intellectual detachment.
I think behaviorly I act almost exactly as you do in terms of trying never to lie but often to evade questions. But for some reason the comment I’m responding to rubs me incredibly negatively. I’m reflecting on why, and I think the difference is that you actually have it easy. You’re trying to live radically honestly in, if I’m not mistaken, the middle of an enclave that has far more of the sort of people that would appreciate Lesswrong in your immediate vicinity than most people do. So you can basically choose to be extremely choosy about your friends in this regard.
Try holding everyone around you to the same standard you live by when most of your neighbors and colleagues are not associated with the rationalist movement at all, and let’s see how far you get. Let me tell ya, it’s a wee bit harder. For most of us, “be lenient with others and strict with thyself” is a pretty natural default.
I suspect, from Chris’ perspective, if his choices are “be invited to Alicorn’s parties” and “be friends with other people at all,” he may go with the latter.
I believed lying was wrong during times of my life when I didn’t live in a rationalist enclave, too. Curating your friends is easier when you are willing to maintain friendships online. Dinner parties are a luxury I am happy to avail myself of, that’s all.
I grew up in rural Oklahoma, in the “buckle of the Bible Belt”, where anti-intellectualism ran rampant. I was radically honest then (not in the literal sense of “radical honesty”, but in the sense of what Alicorn seems to be advocating), and being an atheist, a consequentialist, a transhumanist, and increasingly a libertarian, it didn’t make me very popular—but lying would have been much, much worse. Telling the truth merely made those people dislike me; lying would have made me compromise my integrity.
“Those who mind don’t matter, and those who matter don’t mind.”
Which totally misses the point of the comment you’re responding to. This isn’t about whether we are radically honest. It’s about whether we insist on everyone we associate with also being radically honest as a condition of our association with them.
That’s a good point. I personally require people I associate with to be honest (except when their lives or livelihoods are at stake), as I hate being lied to. How people respond to this is up to them.
My instant urge when you compared polite lies to slashing your tires is to insult you at length. I don’t think this would be pleasant for anyone involved. Radical Honesty is bad for brains running on human substrate.
I do not and have never endorsed indiscriminate braindumping.
I advocate refraining from taking actions that qualify as “lying”. Lying does not include, among other things: following Gricean conversational maxims, storytelling, sarcasm, mutually-understood simplification, omission, being choosy about conversational topics, and keeping your mouth shut for any reason as an alternative to any utterance.
There is no case where merely refraining from lying would oblige you to insult me at length. I don’t know why everyone is reading me as requiring indiscriminate braindumping.
An emotional response to your statement is not indiscriminate braindumping. I’m not talking about always saying whatever happens to be in my mind at any time. Since I’ve probably already compromised any chance of going to a rationalist dinner party by being in favor of polite lies, I might as well elaborate: I think your policy is insanely idealistic. I think less of you for having it. But I don’t think enough less of you not to want to be around you and I think it’s very likely plenty of people you hang out with lie all the time in the style of the top level post and just don’t talk to you about it. We know that humans are moist robots and react to stimuli. We know the placebo effect exists. We know people can fake confidence and smiles and turn them real. But consequentialist arguments in favor of untruths don’t work on a deontologist. I guess mostly I’m irate at the idea that social circles I want to move in can or should be policed by your absurdity.
I don’t think the above constitutes an indiscriminate braindump but I don’t think it would be good to say to anyone face to face and I don’t actually feel confident it’s good to say online.
In particular, outright denouncement of ordinary social norms of the sort used by (and wired into) most flesh people, and endorsement of an alternative system involving much more mental exhaustion for the likes of people like me, feels so much like defecting that I would avoid interacting with any person signalling such opinions.
Incidentally (well after this thread has sort of petered out) I feel the same sort of skepticism or perhaps unenthusiasm about Tell Culture. My summarized thought which applied to both that and this would be, “Yes, neat idea for a science fiction story, but that’s not how humans work.”
Depending on the context, lies of omission can be as bad as, if not worse than, blatant lies (due to being all the more convincing).
Imagine that I ask you, “did you kill your neighbour?”, and you answer “no”. The next week, it is discovered that you hired a hitman to kill your neighbour for you. Technically, you didn’t lie… except by omission.
Personally, I’d categorize putting a hit on somebody as killing them, but if you really, sincerely didn’t think of the words as meaning that, and I asked you that question, and you told me ‘no’, then I wouldn’t add lying to your list of crimes (but you’d already be behaving pretty badly).
The thing I’m measuring here is not, actually, the distance traveled in the audience towards or away from omniscience. It’s something else.
The thing I’m measuring here is not, actually, the distance traveled in the audience towards or away from omniscience. It’s something else.
Something perplexes me about the view you describe, and it’s this:
What is the point?
That is to say: You say lying is bad. You describe a certain, specifically circumscribed, view of what does and does not count as lying. The set of conditions and properties that define lying (which is bad) vs. things that don’t count as lies, in your view, are not obvious to others (as evidenced by this thread and other similar ones), though of course it does seem that you yourself have a clear idea of what counts as what.
So my question is: what is the point of defining this specific set of things as “lying, which is bad”? Or, to put it another way: what’s the unifying principle? What is the rule that generated this distribution? What’s the underlying function?
What kind of scope of omission are you looking for here? If someone asks “what are you up to today?” or “what do you think of my painting?” I can pick any random thing that I really did do today or any thing I really do think of their painting and say that. “Wrote a section of a book” rather than a complete list, “I like the color palette on the background” rather than ”...and I hate everything else about it”.
Also, not speaking never counts as lying. (Stopping mid-utterance might, depending on the utterance, again with a caveat for sincere mistake of some kind. No tricks with “mental reservation”.)
If someone asks “what are you up to today?” or “what do you think of my painting?”...
Ok, that makes sense. But from my perspective, it still sounds like you’re lying; at least, in the second example.
I don’t see any difference between saying “I think your painting is great!” and saying something you honestly expect your interlocutor to interpret in the same way, whereas the literal meaning of the words is quite different. In fact, I’d argue that the second option involves twice the lies.
Also, not speaking never counts as lying.
What, never? Never is a long time, you know. What if your friend asks you, “let me know if any of these paintings suck”, and you say nothing, knowing that all of them pretty much suck?
I would understand it if your policy was something like, “white lies are ok as long as refusing to engage in them would cause more harm in the long run”; but, as far as I can tell, your policy is “white lies are always (plus or minus epsilon) bad”, so I’m not sure how you can reconcile it with the above.
If your friend asks you to serve as a painting-reviewer and you say you will and then you don’t, that’s probably breach of promise. If your friend asks you to do them this service and you stare blankly at them and never do it, you’re probably being kind of a jerk (it’d be nicer to say “I’m not gonna do that” or something) but you are not lying.
I understand your point, but I still do not understand the motivation behind it. Are you following some sort of a consequentialist morality, or a deontological one that states “overt lies are bad, lies of omission are fine”, or something else?
As I see it, if a friend asks you “do you like this painting?” and you reply with “the background color is nice”, the most likely outcomes are:
1. The friend interprets your response as saying, “yes, I like the painting”, as was your intent. In this case, you may not have lied overtly, but you deceived your friend exactly as much.
2. The friend interprets your response as saying, “no, I didn’t like the painting but I’m too polite to say so”. In this case, you haven’t exactly lied, but you communicated the same thing to your friend as you would’ve done with a plain “no”.
3. The friend interprets your response as in (1), with an added “…and also I don’t think you’re smart enough to figure out what I really think”. This is worse than (1).
Similarly, if your friend asks you to review his paintings and you refuse, you’d better have a good reason for refusal (i.e., the truth or some white lie); otherwise, anyone of average intelligence will interpret your response as saying “I hate your paintings but I won’t tell you about it”.
None of what I wrote above matters if you only care about following prescribed rules, as opposed to caring about the effects your actions have on people. Perhaps this is the case? If so, what are the rules, and how did you come by them?
I’m Less Wrong’s token deontologist. I thought most people around here knew that. I wrote this article about it and my personal brand of deontology is detailed in this comment.
Sorry, I did not, in fact, know that; and most people here are consequentialists, so I assumed you were one as well. I’d skimmed your post on deontology that you linked to earlier, but I did not understand that it was meant to represent your actual position (as opposed to merely being educational).
As I said above, if your moral system simply has a rule that states “lying is bad except by omission”, or something similar, then none of my points are valid, so you are right and I was wrong, my apologies.
That said, personally, I don’t think that deontology makes any sense except possibly as a set of heuristics for some other moral system. That’s a different line of debate however, and I won’t push it on you (unless you are actually interested in pursuing it).
I’m willing to answer questions about it if you’re curious, but since I dropped out of grad school I haven’t devoted much time to refining either my ethical theory or my ability to explain it so the old article will probably be just about as good. I get enough debating in just from hanging out with consequentialists all the time :P
As I understand it, deontological systems are, at the core, based on lists of immutable rules.
Where do the rules come from? For example, one rule that comes up pretty often is something like, “people have inalienable rights, especially the right to A, B and C”. How do you know that people have rights; what makes those rights inalienable; and what makes you so sure that A, B and C are on the list, whereas X, Y and Z are not?
I think that rights drop naturally out of personhood. Being a person is to be the kind of thing that has rights (and the obligation to respect same). The rights are slightly alienable via forfeiture or waiver, though.
I don’t quite understand what you mean. Even if we can agree on what “personhood” means (and I’ve argued extensively with people on the topic, so it’s possible that we won’t agree), what does it mean for a right to “drop out naturally” out of personhood? I don’t understand this process at all, nor do I understand the epistemology—how do you determine exactly which rights “drop out naturally”, and which ones do not?
To use a trivial example, most deontologists would probably agree that something like “the right to not be arbitrarily killed by another person” should be on the list of rights that each person has. Most deontologists would probably also agree that something like “the right to possess three violet-blue glass marbles, each exactly 1cm in diameter” should not be on the list. But why?
Likewise. For what it’s worth, though, I don’t actually think there is a good answer to the epistemological questions you asked; that’s one of the reasons I favor consequentialism rather than deontology. Of course, I imagine Alicorn’s views on the matter differ, so I, too, would like to see her answer (or that of any other deontologist who cares to respond).
Well, no. Utilitarian systems are based on a utility function (although I’m not aware of any requirement that it be immutable… actually, what do you mean by “immutable”, exactly?). Consequentialist systems don’t have to be utilitarian.
Even so, the origin of a utility function is not that mysterious. If your preferences adhere to the von Neumann-Morgenstern axioms, then you can construct a utility function (up to positive affine transformation, as I understand it) from your preferences. In general, the idea is that we have some existing values or preferences, and we somehow assign utility values to things (“things”: events? world states? outcomes? something) by deriving them from our existing preferences/values. It’s not a trivial process, by any means, but ultimately the source here is the contents of our own brains.
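For reference, a minimal sketch of the von Neumann-Morgenstern result being gestured at here (the standard textbook statement, nothing specific to this thread): if an agent’s preferences \succsim over lotteries satisfy completeness, transitivity, continuity, and independence, then there exists a function u over outcomes such that

\[ L \succsim M \iff \sum_i p_i \, u(x_i) \;\ge\; \sum_i q_i \, u(x_i), \]

where p_i and q_i are the probabilities that the lotteries L and M assign to the outcomes x_i, and u is unique only up to positive affine transformation: u'(x) = a\,u(x) + b with a > 0 represents exactly the same preferences.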
That’s a valid question, and, admittedly, there’s no good answer that I’m aware of. One might say that, ultimately, the function can be derived from some basic principle like “seek pleasure, avoid pain”, but there’s no objective reason why anyone should follow that principle, as opposed to, say, “seek paperclips, avoid non-paperclips”.
I will grant you that both consequentialism and deontology are based on some a priori assumptions; however, I would argue that the fact that consequentialism is based on fewer such assumptions, as well as its flexibility in the face of new evidence, make consequentialism a more efficient moral system—given that we humans are agents who are reasoning under uncertainty using a comparatively limited amount of data.
however, I would argue that the fact that consequentialism is based on fewer such assumptions
I would argue that this “fact” is not in fact true, or at least not obvious. It’s not even clear to me what the content of that claim is supposed to be. If you mean that it takes fewer bits to encode a utility function than a collection of maxims, then this will obviously depend on which utility function or set of maxims is used; also, as Eliezer points out here, this is a really, really bad way to compare moral systems.
as well as its flexibility in the face of new evidence,
Huh? If your claim is that consequentialism is more flexible in the face of new evidence than deontology, you’re going to have to provide some justification for it (as well as specifying precisely what you mean). As I see it, both are inflexible in the sense that ideal agents of either type are incapable of changing their utility function or set of maxims in the face of any evidence, and flexible in the sense that they can use evidence to determine how to pursue their maxims or maximize their utility function, and also in the sense that actual humans will in fact update their maxims or utility function in the face of evidence.
If you mean that it takes fewer bits to encode a utility function than a collection of maxims...
Not necessarily. You are correct in saying that any given arbitrary utility function can be a lot more complex than any given arbitrary set of rules; so strictly speaking I was wrong. However, in practice, we are not dealing with arbitrary functions or rules; we are dealing with limited subsets of functions/rules which are capable of sustaining a human society similar to ours in at least some way. Of course, other functions and other rules can exist, but IMO a moral system that effectively commands its followers to e.g. kill themselves ASAP is not very interesting.
Given this restriction, I believe that consequentialist moral systems which satisfy it will require fewer arbitrary assumptions, in part due to the following:
Huh? If your claim is that consequentialism is more flexible in the face of new evidence than deontology, you’re going to have to provide some justification for it … As I see it, both are inflexible in the sense that ideal agents of either type are incapable of changing their utility function or set of maxims in the face of any evidence...
Changing the maxims is exactly the problem. Given that deontological maxims are essentially arbitrary; and given that the space of all possible human behaviors is quite large; it is already pretty difficult to construct a set of maxims that will account for all relevant behaviors that are currently possible. Of course, you could always create a maxim that amounts to saying, “maximize this specific utility function”, but then you’re just reducing deontology to consequentialism.
In addition, though, as humans acquire more knowledge of and more power over their environment, the set of possible behaviors keeps changing (usually, by increasing in size). This presents a problem for the deontologist, who has to invent new maxims just to keep up (as well as convincing others to use the new maxims which, as you recall, are entirely arbitrary), as well as to possibly revise existing maxims (ditto). The consequentialist, on the other hand, can apply his existing utility function to the new behaviors, or plug the new data into it, in order to come up with a reasonable re-evaluation of the morality (or lack thereof) of each behavior.
To use a trivial example, at some point in human history, it became possible to digitally copy musical performances without paying any money to the original authors. The deontologists are debating to this very day whether such actions count as “theft” or not, “theft” being a prohibited behavior under one specific maxim. Unfortunately, this new behavior doesn’t quite fit the parameters of the original maxim (which was invented before information technology became widespread), hence the debates. But if we dispense with the labels, and attempt to evaluate whether digital music copying ultimately causes more harm than good (or vice versa), then we can at least make some progress.
In addition, though, as humans acquire more knowledge of and more power over their environment, the set of possible behaviors keeps changing (usually, by increasing in size). This presents a problem for the deontologist, who has to invent new maxims just to keep up (as well as convincing others to use the new maxims which, as you recall, are entirely arbitrary), as well as to possibly revise existing maxims (ditto). The consequentialist, on the other hand, can apply his existing utility function to the new behaviors, or plug the new data into it, in order to come up with a reasonable re-evaluation of the morality (or lack thereof) of each behavior.
Upvoted for this, and the excellent (if trivial) digital copying example.
I will add that progress in such cases may also sometimes be made by attempting to discern just what are the origins of our moral intuitions about the wrongness of theft, seeing if those intuitions may be decomposed, and whether they may be reconstructed to yield some concepts that are appropriate to the digital realm. (I’ve got an essay where I attempt to do just that for software piracy, which I may post online at some point...)
The general principle here is that since the basis of our consequentialist systems is the contents of our brains, we can refer to the source material for guidance (or attempt to, anyway). With deontology, since it doesn’t reduce to anything, that move is not open to us. (I think. I remain unclear about where the rules in a deontological system come from.)
Changing the maxims is exactly the problem. Given that deontological maxims are essentially arbitrary; and given that the space of all possible human behaviors is quite large; it is already pretty difficult to construct a set of maxims that will account for all relevant behaviors that are currently possible.
Utility functions have the same problem. See below for more details.
The consequentialist, on the other hand, can apply his existing utility function to the new behaviors, or plug the new data into it, in order to come up with a reasonable re-evaluation of the morality (or lack thereof) of each behavior.
Huh? This doesn’t resemble the behavior of any consequentialist I have ever encountered. In practice when presented with new possibilities, consequentialists wind up doing logical back flips to avoid having to do things, such as torturing children to cure malaria, that they find deontologically repugnant.
Utility functions have the same problem. See below for more details.
Yes, of course. I have already said that a deontological system with a single rule that says, “maximize utility function F” would be equivalent to consequentialism, and thus they would share the same problems. However, in practice deontological systems tend to have many more immutable rules than that, and thus they are more susceptible to said problems, as per my previous post.
This doesn’t resemble the behavior of any consequentialist I have ever encountered. In practice when presented with new possibilities, consequentialists wind up doing logical back flips to avoid having to do things...
That sounds like you’re saying, “no one I know is actually a consequentialist, they are all crypto-deontologists in reality”, which may be true but is not relevant.
In addition, you may disagree with the decision to torture children to cure malaria; and that action may in fact be objectively wrong; but nowhere did I say that real consequentialists will always make correct decisions. By analogy, GPS navigation systems don’t give us perfect answers every time, but that doesn’t mean that the very concept of GPS navigation is invalid.
However, in practice deontological systems tend to have many more immutable rules than that, and thus they are more susceptible to said problems, as per my previous post.
What problems would those be? The only problems you mentioned in your previous post are:
Changing the maxims is exactly the problem. Given that deontological maxims are essentially arbitrary; and given that the space of all possible human behaviors is quite large; it is already pretty difficult to construct a set of maxims that will account for all relevant behaviors that are currently possible.
and
In addition, though, as humans acquire more knowledge of and more power over their environment, the set of possible behaviors keeps changing (usually, by increasing in size). This presents a problem for the deontologist, who has to invent new maxims just to keep up (as well as convincing others to use the new maxims which, as you recall, are entirely arbitrary), as well as to possibly revise existing maxims (ditto).
When I pointed out that consequentialists have the same problems with changing their utility functions, you declared it “true but not relevant”.
In addition, you may disagree with the decision to torture children to cure malaria; and that action may in fact be objectively wrong; but nowhere did I say that real consequentialists will always make correct decisions. By analogy, GPS navigation systems don’t give us perfect answers every time, but that doesn’t mean that the very concept of GPS navigation is invalid.
This analogy isn’t accurate. I’m not saying looking at consequences/GPS navigation is invalid. You’re the one who’s saying all non-GPS navigation is invalid/look only at consequences.
Wait, what? What Bugmaster described sounds like the behavior of most of the consequentialists I’ve encountered.
Also, I don’t see what the linked situation (i.e. torture vs. malaria) actually has to do with the current issue. The issue Bugmaster raises is that of new behaviors that don’t precisely resemble any existing behaviors. How does the malaria-children-torture case fit that category?
The issue Bugmaster raises is that of new behaviors that don’t precisely resemble any existing behaviors. How does the malaria-children-torture case fit that category?
When presented with a new potential behavior, in this case torturing children to cure malaria, that provides an actual consequentialist reason for doing something deontologically repugnant, he winds up doing logical back flips.
The issue is that the consequentialist has a secret set of deontological maxims, and he chose his utility function to avoid being forced to violate them; he thus has problems when it turns out he does have to violate them to maximize the utility function. His first reaction to this is frequently to deny that the repugnant action would in fact maximize his utility function, sometimes even resorting to anti-epistemology in order to do so. If that fails he will change his utility function; do this enough and the utility function starts to resemble a count of the number of maxim violations.
Edit: Of course, the other possibility is that the consequentialist decides that the repugnant action isn’t so repugnant after all and commences torturing children.
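To make the endpoint I’m describing concrete, here’s a toy sketch of a “utility function” that has been revised until it is nothing but a maxim-violation counter; the maxims and action properties are invented for the example.

```python
# A hypothetical utility function that has collapsed into a maxim-violation
# counter: it subtracts a point per violated maxim and ignores everything else.

MAXIMS = ["lie", "steal", "torture"]

def utility(action_properties):
    violations = sum(1 for m in MAXIMS if action_properties.get(m, False))
    return -violations

# Note that the cured malaria contributes nothing to the score.
print(utility({"torture": True, "cures_malaria": True}))  # -1
print(utility({"lie": False}))                            # 0
```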
First of all, I must ask that you stop equating utilitarianism with consequentialism.
Second of all, torturing children is not a new behavior, in the way Bugmaster was using the phrase. A new behavior is something that wasn’t available before, wasn’t possible, like “copying digital media”. You couldn’t copy digital media in the year 1699 no matter what your moral beliefs were. You could, on the other hand, torture children all you liked.
First of all, I must ask that you stop equating utilitarianism with consequentialism.
Where am I doing that? I don’t think the word “utilitarian” was even used in this discussion previously; I tend to avoid using it, since it has several similar but different definitions and thus tends to cause confusion in discussions.
Second of all, torturing children is not a new behavior
True, but torturing children to cure malaria is. Another example that may make things clearer is wire-heading, which causes problems for a utility function that hasn’t sufficiently specified what it means by “pleasure” just as “copying digital media” can cause problems for maxims that haven’t specified what they mean by “theft”.
Where am I doing that? I don’t think the word “utilitarian” was even used in this discussion previously; I tend to avoid using it, since it has several similar but different definitions and thus tends to cause confusion in discussions.
My entire point is that you are ascribing things to consequentialism that are true of utilitarianism, but are not true of consequentialism-in-general.
Ok, I was occasionally talking about Von Neumann–Morgenstern consequentialism since that’s what most consequentialists around here are. If you mean something else by “consequentialism”, please define it. We may have a failure to communicate here.
One may be a consequentialist without adhering to the von Neumann-Morgenstern axioms. “Consequentialism” is a fairly general term; all it means is “evaluates normative properties of things[1] on the basis of consequences” (”… rather than other things, such as the properties of the thing itself, that are not related to consequences”).
The SEP article on consequentialism is, as usual, a good intro/summary. To give a flavor of what other kinds of consequentialism one may have, here, to a first approximation, is my take on the list of claims in the “Classic Utilitarianism” section of the article:
- Consequentialism: yes.
- Actual Consequentialism: no.
- Direct Consequentialism: no.
- Evaluative Consequentialism: yes, provisionally.
- Hedonism: no.
- Maximizing Consequentialism: intuition says no, because it seems to exclude the notion of supererogatory acts.
- Aggregative Consequentialism: intuition says yes, but this is problematic (Bostrom 2011) [2], so perhaps not.
- Total Consequentialism: probably not (though average is wrong too; then again, without the aggregative property, I don’t think this problem even arises).
- Universal Consequentialism: intuition says no, but I have a feeling that this is problematic; then again, a “yes” answer to this, while clearly more consistent, fails to capture some very strong moral intuitions.
- Equal Consideration: see the universal property; same comment.
- Agent-neutrality: seems like obviously yes, but this is one I admit I know little about the implications of.
As you can see, I reject quite a few of the claims that one must assent to in order to be a classic utilitarian (and a couple which are required for VNM-compliance), but I remain a consequentialist.
[1] Usually “things” = acts, “properties” = moral rightness. [2] Infinite Ethics
One may be a consequentialist without adhering to the von Neumann-Morgenstern axioms. “Consequentialism” is a fairly general term; all it means is “evaluates normative properties of things[1] on the basis of consequences” (”… rather than other things, such as the properties of the thing itself, that are not related to consequences”).
Should I take that to mean evaluation only on the basis of consequences, or on the basis of consequences and other things?
Edit: Although one of the interesting conclusions of Bostrom’s aforementioned paper is that bounding aggregative consequentialism with deontology gives better[1] results than just applying consequentialism. (Which I take to cast doubt on the aggregative property, among other things, but it’s something to think about.)
[1] “Better” = “in closer accord with our intuitions”… sort of. More or less.
Ok, in that case most of my criticism of consequentialism still applies, just replace “utility function” with whatever procedure general consequentialists use to compute moral actions.
Consequentialists get their “whatever procedure” from looking at human moral intuitions and shoring them up with logic — making them more consistent (with each other, and with themselves given edge cases and large numbers and so forth), etc., while hewing as closely as possible to the original intuitions.
It’s a naturalistic process. It’s certainly not arbitrarily pulled from nowhere. The fact is that we, humans, have certain moral intuitions. Those intuitions may be “arbitrary” in some abstract sense, but they certainly do exist, as actual, measurable facts about the world (since our brains are part of the world, and our brains are where those intuitions live).
I mean, I’m not saying anything new here. Eliezer had a whole sequence about more or less this topic. Robin Hanson wrote a paper on it (maybe multiple papers, but I recall one off the top of my head).
Now, you could ask: well, why look to our moral intuitions for a source of morality? And the answer is: because they’re all we have. Because they are what we use (the only thing we could use) to judge anything else that we select as the source of morality. Again, this stuff is all in the Sequences.
Consequentialists get their “whatever procedure” from looking at human moral intuitions and shoring them up with logic — making them more consistent (with each other, and with themselves given edge cases and large numbers and so forth), etc., while hewing as closely as possible to the original intuitions.
Really, to me it looks more like they take one moral intuition, extrapolate it way beyond its context, and disregard the rest.
We also have a lot of deontological moral intuitions and even more virtue ethical moral intuitions.
I mean, I’m not saying anything new here. Eliezer had a whole sequence about more or less this topic.
If you mean the meta-ethics sequence, it’s an argument for why we base our morality on intuitions (and even then I don’t think that’s an entirely accurate summary); its argument for pure consequentialism is a lot weaker and relies entirely on the VNM theorem. Since you’ve claimed not to be a VNM consequentialist, I don’t see how that sequence helps you. Also you do realize there are bookshelves full of philosophers who’ve reached different conclusions?
Now, you could ask: well, why look to our moral intuitions for a source of morality? And the answer is: because they’re all we have.
Would you apply the same logic to claim that our physical intuitions are our only source of physics? Or, to use an even more obvious parallel, that our mathematical intuitions are our only source of mathematics? In a sense these statements are indeed true, but it is certainly misleading to phrase it that way.
Also, if you say moral intuition is our only source of morality, if people’s moral intuitions differ, are they obligated to obey their personal moral intuition? If so, does that mean it’s moral for me to murder if my intuition says so? If not, whose intuition should we use?
Really, to me it looks more like they take one moral intuition, extrapolate it way beyond its context, and disregard the rest.
Which moral intuition is that...?
Also you do realize there are bookshelves full of philosophers who’ve reached different conclusions?
Yes, I studied some of them in college. My assessment of academic philosophers is that most of them are talking nonsense most of the time. There are exceptions, of course. If you want to talk about the positions of any particular philosopher(s), we can do that (although perhaps for that it might be worthwhile to start a new Discussion thread, or something). But just the fact that many philosophers think some particular thing isn’t strong evidence of anything interesting or convincing.
Would you apply the same logic to claim that our physical intuitions are our only source of physics? Or, to use an even more obvious parallel, that our mathematical intuitions are our only source of mathematics? In a sense these statements are indeed true, but it is certainly misleading to phrase it that way.
Um, what logic? For physics and mathematics the claim that “our X-ical intuitions are our only source of X” is simply false: for physics we can do experiments and observe the real world, whereas mathematics… well, there’s more than one way to view it, but if you take mathematics to consist merely of formal systems, then those systems have no “source” as such. Insofar as any of those formal systems describe any aspect of reality, we can look at reality and see that.
For morality there just isn’t anything else, beyond our intuitions.
Also, if you say moral intuition is our only source of morality, if people’s moral intuitions differ, are they obligated to obey their personal moral intuition? If so, does that mean it’s moral for me to murder if my intuition says so? If not, whose intuition should we use?
Moral laws don’t exist anywhere outside of human brains, so in one sense this entire line of questioning is meaningless. It’s not like moral laws can actually compel you to do one thing or another, regardless of whether you are a consequentialist or a deontologist or what. Moral laws have force insofar as they are convincing to any humans who have the power to enforce them, whether this be humans deciding to follow a moral law in their own lives, or deciding to impose a moral law on others, etc.
If people’s moral intuitions differ then I guess those people will have to find some way to resolve that difference. (Or maybe not? In some cases they can simply agree to go their separate ways. But I suppose you’d say, and I’d agree, that those are not the interesting cases, and that we’re discussing those cases where the disagreement on morality causes conflict.)
I mean, I can tell you what tends to happen in practice when people disagree on morality. I can tell you what I in particular will do in any given case. But asking what people should do in cases of moral disagreement is just passing the buck.
I hope you’re not suggesting that deontology, or any other system, has some resolution to all of this? It doesn’t seem like you are, though; I get the sense that you are merely objecting to the suggestion that consequentialism has the answers, where deontology does not. If so, then I grant that it does not. However, these are not the questions on which basis I judge deontology to be inferior.
Rather, my point was that even if we grant that there are, or should be, absolute, unbreakable moral laws that judge actions, regardless of consequences (i.e. accept the basic premise of deontology), it’s entirely unclear what those laws should be, or where they come from, or how we should figure out what they are, or why these laws and not some others, etc. Consequentialism doesn’t have this problem. Furthermore, because moral intuitions are the only means by which we can judge moral systems, the question of whether a moral system satisfies our moral intuitions is relevant to whether we accept it. Deontology, imo, fails in this regard to a much greater degree than does consequentialism.
Um, what logic? For physics and mathematics the claim that “our X-ical intuitions are our only source of X” is simply false: for physics we can do experiments and observe the real world,
Because our physical intuitions tell us that should work.
whereas mathematics… well, there’s more than one way to view it, but if you take mathematics to consist merely of formal systems, then those systems have no “source” as such.
Then why are we focusing on those particular formal systems? Also where do our ideas about how formal systems should work come from?
I hope you’re not suggesting that deontology, or any other system, has some resolution to all of this?
Well, look at the game theory based decision theories, notice that they seem to be converging on something resembling Kantian deontology. Also, why do you hope that? Don’t you want the issue resolved?
Because our physical intuitions tell us that should work.
I’m not really sure what you mean by this.
Then why are we focusing on those particular formal systems?
Why indeed? Mathematics does sometimes examine formal systems that have no direct tie to anything in the physical world, because they are mathematically interesting. Sometimes those systems turn out to be real-world-useful.
Also where do our ideas about how formal systems should work come from?
What do you mean, “how formal systems should work”? Formal systems are defined in a certain way. Therefore, that is how they work. Why do we care? Well, because that’s an approach that allows us to discover/invent new math, and apply that math to solve problems.
Well, look at the game theory based decision theories, notice that they seem to be converging on something resembling Kantian deontology.
Really? Kantian deontology, and definitely not rule consequentialism?
Also, why do you hope that? Don’t you want the issue resolved?
I meant, by that, that such a claim would be clearly false. If you were claiming clearly false things then that would make this conversation less interesting. ;)
Because our physical intuitions tell us that should work.
I’m not really sure what you mean by this.
Where does your belief that observing the world will lead us to true beliefs come from?
What do you mean, “how formal systems should work”? Formal systems are defined in a certain way.
First, where do those definitions come from? Second, as Lewis Carroll showed, a definition of a formal system is not the same as a formal system, since definitions of a formal system don’t have the power to force you to draw conclusions from premises.
Really? Kantian deontology, and definitely not rule consequentialism?
Yes, you may want to look into decision theories, many of which take superrationality as their starting point. Or do you mean taking the Categorical Imperative as a rule consequentialist rule?
I meant, by that, that such a claim would be clearly false.
Careful, just because you can’t think of a way to resolve a philosophical problem, doesn’t mean there is no way to resolve it.
… and many posts in the Sequences. (The posts/essays themselves aren’t an answer to “where does this belief come from”, but their content is.)
First, where do those definitions come from?
We made ’em up.
Second, as Lewis Carroll showed, a definition of a formal system is not the same as a formal system, since definitions of a formal system don’t have the power to force you to draw conclusions from premises.
Yes, you may want to look into decision theories, many of which take superrationality as their starting point. Or do you mean taking the Categorical Imperative as a rule consequentialist rule?
I am passingly familiar with these systems. I don’t know why you would claim that they have anything to do with deontology, since the entire motivation for accepting superrationality is “it leads to better consequences”. If you follow unbreakable rules because doing so leads to better outcomes, then you are a consequentialist.
Careful, just because you can’t think of a way to resolve a philosophical problem, doesn’t mean there is no way to resolve it.
Um, ok, fair enough, so in that case how about we stop dancing around the issue, and I will just ask straight out:
Do you believe that deontology has a resolution to the aforementioned issues? Or no?
Upvoted for spotting something probably non-obvious: the parallel between Kantian ethics and certain decision theories seems quite interesting and never occurred to me. It’s probably worth exploring how deep it runs, perhaps the idea that being a rational agent in itself compels you inescapably to follow rules of a certain form might have some sort of reflection in these decision theories.
Also, why do you hope that? Don’t you want the issue resolved?
I certainly would hope that there doesn’t turn out to be a universal cosmic moral law derivable from nothing but logic, if it happens to be a law I really hate like “you must kill kittens”. :)
We also have a lot of deontological moral intuitions and even more virtue ethical moral intuitions.
This is true. Personally, I think that to the extent that those intuitions ought to be satisfied, they are compatible with consequentialism. This isn’t 100% true, but it’s fairly close, it seems to me.
Those intuitions involve caring about things besides consequences. One way to deal with this is to say that those intuitions shouldn’t be satisfied, but you are left with the question of on what basis you are making that claim. The other way I’ve seen people deal with it is to expand the definition of “consequences” until the term is so broad as to be meaningless.
I agree that the latter maneuver is a poor way to go. The former does make the resulting morality rather unsatisfactory.
My view —
Personally, I think that to the extent that those intuitions ought to be satisfied, they are compatible with consequentialism.
— is another way of saying that some intuitions that seem deontological or virtue-ethical are in fact consequentialist. Others are not consequentialist, but don’t get in the way of consequentialism, or satisfying them leads to good consequences even if the intuitions themselves are entirely non-consequentialist. The remainder generally shouldn’t be satisfied, a decision that we reach in the same way that we resolve any conflict between our moral intuitions:
For example, do you think creating a person and then killing him is morally equivalent to not creating him in the first place because the consequences are the same?
Can you expand on what you mean by “final outcome” here, and why it matters?
For my part, I would say that the difference between the world in which a person lives N years and then dies and all the effects of that person’s actions during those N years are somehow undone, and the world in which they didn’t live at all, is the N years of that person’s life.
What you seem to want to say is that those N years aren’t a consequence worthy of consideration, because after the person’s death they aren’t alive anymore, and all that matters is the state of the world after their death. Did I get that right?
That puzzles me. It seems that by this reasoning, I can just as readily conclude that if the universe will ultimately achieve a maximum-entropy condition, then a consequentialist must conclude that all actions are ultimately equally moral, since the “final outcome” will be identical.
At the risk of repeating myself: it seems to me that if action A results in a year of my life followed by the eradication of all traces of my existence, and action B results in two years of my life followed by the eradication of all traces of my existence, then if I consider years of my life an important differential consequence with which to evaluate the morality of actions at all, I should prefer B to A since it creates an extra year of my life, which I value.
The fact that the state of the world after two years is identical in both branches of this example isn’t the only thing that matters to me, or even the thing that matters most to me.
For my own part, I don’t see how that makes “consequences” a meaningless term, and I can’t see why anyone for whom the only consequences that matter are the “final” outcome should be a consequentialist, or care about consequences at all.
Again, I suspect this is a terminological confusion—a confusion over what “consequentialism” actually means caring about.
To you—and me—a “consequence” includes the means, the end, and any inadvertent side-effects. Any result of an action.
To Eugine, and some others, it includes the end, and any inadvertent side-effects; but apparently the path taken to them, the means, is not included. I can see how someone might pick up this definition from context, based on some of the standard examples. I’ve done similar things myself with other words.
(As a side note, I have also seen it assumed to include only the end—the intended result, not any unintended ones. This is likely due to using consequentialism to judge people, which is not the standard usage but common practice in other systems.)
Perhaps not coincidentally, I have only observed the latter two interpretations in people arguing against consequentialism, and/or the idea that “the ends justify the means”. If you’re interested, I think tabooing the terms involved might dissolve some of their objections, and you both may find you now disagree less than you think. But probably still a bit.
As I understand Eugine, he’d say that in my example above there’s no consequentialist grounds for choosing B over A, since in two years the state of the world is identical and being alive an extra year in the interim isn’t a consequence that motivates choosing B over A.
If I’ve understood properly, this isn’t a terminological confusion, it’s a conflict of values. If I understood him correctly, he thinks it’s absurd to choose B over A in my example based on that extra year, regardless of whether we call that year a “consequence” or something else.
That’s why I started out by requesting some clarification of a key term. Given the nature of the answer I got, I decided that further efforts along these lines would likely be counterproductive, so I dropped it.
As I understand Eugine, he’d say that in my example above there’s no consequentialist grounds for choosing B over A, since in two years the state of the world is identical and being alive an extra year in the interim isn’t a consequence that motivates choosing B over A.
Right, as a reductio of choosing based on “consequentialist grounds”. His understanding of “consequentialist grounds”.
A reductio argument, as I understand it, adopts the premise to be disproved and shows how that premise leads to a falsehood. What premise is being adopted here, and what contradiction does it lead to?
Um, the premise is that only “consequences” or final outcomes matter, and the falsehood derived is that “creating a person and then killing him is morally equivalent to not creating him in the first place because the consequences are the same”.
But it looks like there may be an inferential distance between us? Regardless, tapping out.
My understanding of consequentialism is similar to yours and TheOtherDave’s. In a chain of events, I consider all events in the chain to be a consequence of whatever began the chain, not just the final state.
I can see how someone might pick up this definition from context, based on some of the standard examples
I can’t, to be honest. Pretty much all the standard examples that I can think of relating to consequentialism fall into one of two categories: first, thought experiments aimed at forcing counterintuitive behavior out of some specific dialect of utilitarianism (example: the Repugnant Conclusion); and second, thought experiments contrasting some noxious means with a desirable end (example: the trolley problem).
Biting the bullet on the latter is a totally acceptable response and is in fact one I endorse; but I can’t see how you can look at e.g. the trolley problem and conclude that people biting that bullet are ignoring the fat man’s life; its loss is precisely what makes the dilemma a dilemma. Unless I totally misunderstand what you mean by “means”.
Now, if you’re arguing for some non-consequential ethic and you need some straw to stuff your opponent with… that’s a different story.
Biting the bullet on the latter is a totally acceptable response and is in fact one I endorse; but I can’t see how you can look at e.g. the trolley problem and conclude that people biting that bullet are ignoring the fat man’s life
They’re not ignoring his life, they’re counting it as 1 VP (Victory Point) and contrasting with the larger number of VP’s they can get by saving the people on the track. The fact that you kill him directly is something you’re not allowed to consider.
Well, nothing in the definition of consequential ethics requires us to be looking exclusively at expected life years or pleasure or pain. It’s possible to imagine one where you’re summing over feelings of violated boundaries or something, in which case the fact that you’ve killed the guy directly becomes overwhelmingly important and the trolley problem would straightforwardly favor “do not push”. It’s just that most consequential ethics don’t, so it isn’t; in other words this feature emerges from the utility function, not the metaethical scheme.
(As an aside, it seems to me that preference utilitarianism—which I don’t entirely endorse, but which seems to be the least wrong of the common utilitarianisms—would in many cases weight the fat man’s life more heavily than that of a random bystander; many people, given the choice, would rather die by accident than through violence. It wouldn’t likely be enough to change the outcome in the standard 1:5 case, but it would be enough to make us prefer doing nothing in a hypothetical 1:1 case, rather than being indifferent as per total utilitarianism. Which matches my intuition.)
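To make the point that the verdict emerges from the utility function rather than from consequentialism itself, here’s a toy sketch. The outcome features and weights are invented, and both evaluators below are consequentialist in form; they simply score different features of the outcome.

```python
# Two consequentialist evaluations of the footbridge trolley case, differing
# only in which features of the outcome they score (all numbers invented).

outcomes = {
    "push":      {"deaths": 1, "boundary_violations": 1},
    "dont_push": {"deaths": 5, "boundary_violations": 0},
}

def lives_utility(o):
    return -o["deaths"]

def boundary_utility(o):
    # Weights a directly violated boundary far more heavily than a death.
    return -o["deaths"] - 100 * o["boundary_violations"]

print(max(outcomes, key=lambda a: lives_utility(outcomes[a])))     # push
print(max(outcomes, key=lambda a: boundary_utility(outcomes[a])))  # dont_push
```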
That was one example in a very large space of possibilities; you can differentiate the consequences of actions in any way you please, as long as you’re doing so in a well-behaved way. You don’t even need to be using a sum—average utilitarianism doesn’t.
This does carry a couple of caveats, of course. Some methods give much less pathological results than others, and some are much less well studied.
Summing over actual violated boundaries is also a possible consequentialism, but it does not seem to capture the intuitions of those deontological theories which forbid pushing the fat guy. Suppose the driver of the trolley is a mustache-twirling villain who has tied the other five people to the tracks deliberately to run the trolley over them (thus violating their boundaries). Deontologists would say this makes little difference for your choice in the dilemma: you are still not permitted to throw the fat man on the tracks to save them. This deontological rule cannot be mimicked with a consequentialism that assigns high negative value to boundary-violations regardless of agent. It can, perhaps (I am not entirely sure), be mimicked with a consequentialism that assigns high negative value to the subjective feeling of violating a boundary yourself.
To Eugine, and some others, it includes the end, and any inadvertent side-effects; but apparently the path taken to them, the means, is not included.
Well, most of the well known consequentialist dilemmas rely on forbidding considering the path; in fact, not caring about it is one of the premises of the VNM theorem.
most of the well known consequentialist dilemmas rely on forbidding considering the path
As I said, “I can see how someone might pick up this definition from context, based on some of the standard examples.”
I don’t think it’s the intention of those examples, however—at least, not the ones that I’m thinking of. Could you describe the ones you have in mind, so we can compare interpretations?
not caring about it is one of the premises of the VNM theorem
I … think this is a misinterpretation, but I’m most definitely not a domain expert, so could you elaborate?
How does saying that something positive-utility remains good independent of other factors, and something negative-utility remains bad, preclude caring about those other factors too? If it did, why would that only include “the path”, and not other things we care about, because other subsets of reality are good or bad independent of them too?
Don’t get me wrong; I understand that in various deontological and virtue ethics systems we wouldn’t care about the “end” at all if it were reached through incorrect “means”. Consequentialists reject this*; but by comparing the end and the means, not ignoring the means altogether! At least, in my limited experience, anyway.
Again, could you please describe some of the thought experiments you were thinking of?
*(although they don’t all care for independence as an axiom, because it doesn’t apply to instrumental goals, only terminal ones)
To take an extreme example, in the classic cannibal lifeboat scenario, the moral solution is generally considered to be drawing straws. That is, this is considered preferable to just eating Bill, or Tom for that matter, even though according to the independence axiom there should be a particular person among the participants sacrificing whom would maximize utility.
I don’t think that’s a consequentialist thought experiment, though? Could you give examples of how it’s illustrated in trolley problems, ticking time bomb scenarios, even forced-organ-donation-style “for the greater good” arguments? If it’s not too much trouble—I realize you’re probably not anticipating huge amounts of expected value here.
(I think most LW-style utilitarian consequentialists would agree there is probably an optimal one, but unilaterally deciding that yourself might lead to additional consequences—better to avoid selfish infighting and, most importantly, perceived unfairness, especially when you may be too uncertain about the outcomes anyway. So that’s a data point for you.)
I don’t think that’s a consequentialist thought experiment, though?
What do you mean by “consequentialist thought experiment”?
I think most LW-style utilitarian consequentialists would agree there is probably an optimal one, but unilaterally deciding that yourself might lead to additional consequences—better to avoid selfish infighting and, most importantly, perceived unfairness, especially when you may be too uncertain about the outcomes anyway. So that’s a data point for you.
Yes, you can always argue that any behavior is instrumental, replacing it with the reason it came to be thought of as moral, but if you go down that route, you’ll end up concluding the purpose of life is to maximize inclusive genetic fitness.
What do you mean by “consequentialist thought experiment”?
One of the standard thought experiments used to demonstrate and/or explain consequentialism. I’m really just trying to see what your model of consequentialism is based on.
Yes, you can always argue that any behavior is instrumental, replacing it with the reason it came to be thought of as moral, but if you go down that route, you’ll end up concluding the purpose of life is to maximize inclusive genetic fitness.
Well, we’re adaptation-executors, not fitness-maximizers—the environment has changed. But yeah, there’s a very real danger in coming up with grandiose rationalizations for how all your moral intuitions are really consequences of your beautifully simple unified theory.
there’s a very real danger in coming up with grandiose rationalizations for how all your moral intuitions are really consequences of your beautifully simple unified theory.
And there’s a very real danger of this being a fully general counterargument against any sufficiently simple moral theory.
You’re absolutely right about that. In fact, there’s a danger that it can be a fully general counterargument against any moral theory at all! After all, they might simply be rationalizing away the flaws...
I wouldn’t endorse using it as a counterargument at all, honestly. If you can point out actual rationalizations, that’s one thing, but merely calling someone a sophisticated arguer is absolutely a Bad Idea.
I think that’s one of the areas where Eliezer got it completely wrong. Value isn’t that complex, and it’s a mistake to take people’s apparent values at face value as he seems to.
Our values are psychological drives from a time in our evolutionary history before we could possibly be consequentialist enough to translate a simple underlying value into all the actions required to satisfy it. Which means that evolution had to bake in the “break this down into subgoals” operation, leaving us with the subgoals as our actual values. Lots of different things are useful for reproduction, so we value lots of different things. I would not have found that wiki article convincing either back when I believed as you believe, but have you read “Thou Art Godshatter”?
People have drives to value different things, but a drive to value is not the same thing as a value. For example, people have an in-group bias (tribalism), but that doesn’t mean that it’s an actual value.
If values are not drives (Note I am saying values are drives, not “drives are values”, “drives to value are values”, or anything else besides “values are drives”), what functional role do they play in the brain? What selection pressure built them into us? Or are they spandrels? If this role is not “things that motivate us to choose one action over another,” why are they motivating you to choose one action over another? If that is their role, you are using a weird definition of “drive”, so define “Fhqwhgads” as “things that motivate us to choose one action over another”, and substitute that in place of “value” in my last argument.
If values are drives, but not all drives are values, then…
(a) if a value is a drive you reflectively endorse and a drive you reflectively endorse is a value, then why would we evolve to reflectively endorse only one of our evolved values?
(b) otherwise, why would either you or I care about what our “values” are?
I agree that values are drives, but not all drives are values. I dispute that we would reflectively endorse more than one of our evolved drives as values. Most people aren’t in a reflective equilibrium, so they appear to have multiple terminal values—but that is only because they aren’t in a reflective equilibrium.
What manner of reflection process is it that eliminates terminal values until you only have one left? Not the one that I use (At least, not anymore, since I have reflected on my reflection process). A linear combination (or even a nonlinear combination) of terminal values can fit in exactly the same spot that a single value could in a utility function. You could even give that combination a name, like “goodness”, and call it a single value (though it would be a complex one). So there is nothing inconsistent about having several separate values.
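As a toy rendering of that “linear combination” point (value names and weights invented for the example), several terminal values can occupy the slot a single value would, trading off against each other without any of them being discarded:

```python
# Several terminal values combined into one "goodness" score; the combination
# sits exactly where a single value would in a utility function.

WEIGHTS = {"happiness": 1.0, "artistic_expression": 0.3, "fairness": 0.5}

def goodness(outcome):
    return sum(w * outcome.get(v, 0.0) for v, w in WEIGHTS.items())

print(goodness({"happiness": 2.0, "artistic_expression": 1.0}))  # 2.3
print(goodness({"happiness": 1.0, "fairness": 3.0}))             # 2.5
```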
Let me hazard a guess, based on my own previous reflection process, now abandoned due to meta-reflection. First, I would find a pair of thought experiments where I had strong feelings for an object-level choice in each, and I felt I was being inconsistent between them. Of course, object-level choices in two different scenarios can’t be inconsistent. There is a computation that returns both of those answers, namely, whatever was going on in your pre-reflection brain.
For example, “throw the level, redirect the trolley to kill 1 instead of 5” and “don’t butcher the healthy patient and steal their organs to save five.”
The inconsistency is in the two principles I would have automatically come up with to explain two different object-level choices. Or, if my reasons for one emotional reaction are too complicated for me to realize, then it’s between one principle and the emotional reaction. Of course, the force behind the principle comes from the emotional reaction to the thought experiment which motivated it.
Then, I would let the two emotions clash against each other, letting my mind flip between the two scenarios back and forth until one started to weaken. The winner would become stronger, because it survived a clash. And so did the principle my mind coughed up to explain it.
What are the problems with this?
1. It favors simple principles for the sole reason that they are easier to guess by my conscious mind, which of course doesn’t really have access to the underlying reasons. It just thinks it does. This means it depends on my ignorance of other more complicated principles. This part can be destroyed by the truth.
2. The strength of the emotion for the object-level choice is often lent to the principle by something besides what you think it is. Yvain covered this in an essay that you, being a hedonistic utilitarian, would probably like: Wirehead Gods on Lotus Thrones. His example is that being inactive and incredibly happy without interruption forever sounds good to him if he thinks of Buddhists sitting on lotuses and being happy, but bad if he thinks of junkies sticking needles in their arms and being happy. With this kind of reflection, you consciously think something like: “Of course, sitting on the lotus isn’t inherently valuable, and needles in arms aren’t inherently disvaluable either,” but unconsciously, your emotional reaction to that is what’s determining which explicit principles like “wireheading is good” or “wireheading is bad” you consciously endorse.
3. All of your standard biases are at play in generating the emotional reactions in the first place. Scope insensitivity, status quo bias, commitment bias, etc.
4. This reflection process can go down different paths depending on the order in which thought experiments are encountered. If you get the “throw switch, redirect trolley” one first, and then are told you are a consequentialist, and that there are other people who don’t throw the switch because then they are personally killing someone, and you think about their thought process and reject it as a bad principle, and then you see the “push the fat man off the bridge” one, and you think “wow, this really feels like I shouldn’t push him off the bridge, but [I have this principle established where I act to save the most lives, not to keep my hands clean]”, then slowly your instinct (like mine did) starts to become “push the fat man off the bridge.” And then you hear the transplant version, and you become a little more consequentialist. And so on. It would be completely different if most people heard the transplant one first (or an even more deontology-skewed thought experiment). I am glad, of course, that I have gone down this path as far as I have. Being a consequentialist has good consequences, and I like that! But my past self might not have agreed, and likewise I probably won’t agree with most possible changes to my values. Each version of me judges differences between the versions under its own standards.
5. There’s the so-called sacred vs. secular value divide (I actually think it’s more of a hierarchy, with several layers of increasing sacredness, each of which feels like it should lexically override the last), where pitting a secular value against a sacred value makes the secular value weaker and the sacred one stronger. But which values are secular or sacred is largely a function of what your peers value.
And whether a value becomes stronger or weaker through this process depends largely on which pairs of thought experiments you happen to think of. Is a particular value, say “artistic expression”, being compared to the value of life, and therefore growing weaker, or is it being compared to the value of not being offended, and therefore growing stronger?
So that you don’t ignore my question like you did the one in the last post, I’ll reiterate it. (And I’ll add some other questions).
What process of reflection are you using that you think leads people toward a single value?
Does it avoid the problems with my old one that I described?
Is this a process of reflection most people would meta-reflectively endorse over alternative ones that don’t shrink them down to one value? (If you are saying that people who have several values are out of reflective equilibrium, then you’d better argue for this point.)
I endorse the process you rejected. I don’t think the problems you describe are inevitable. Given that, if people’s values cause them conflict in object-level choices, they should decide what matters more, until they’re at a reflective equilibrium and have only one value.
But how do you avoid those problems? Also, why should contemplating tradeoffs between how much of each value we can get force us to pick one? I bet you can imagine tradeoffs between bald people being happy, and people with hair being happy, but that doesn’t mean you should change your value from “happiness” to one of the two. Which way you choose in each situation depends on how many bald people there are, and how many non-bald people there are. Similarly, with the right linear combination, these are just tradeoffs, and there is no reason to stop caring about one term because you care about the other more. And you didn’t answer my last question. Why would most people meta-reflectively endorse this method of reflection?
1, as you said, can be destroyed by the truth (if they’re actually wrong), so it’s part of a learning process. 2 isn’t a problem once you isolate the principle by itself, outside of various emotional factors. 3 is a counterargument against any kind of decisionmaking; it means that we should be careful, not that we shouldn’t engage in this sort of reflection. 4 is the most significant of these problems, but again it’s just something to be careful about, same as in 3. As for 5, that’s to be solved by realizing that there are no sacred values.
why should contemplating tradeoffs between how much of each value we can get force us to pick one?
It doesn’t, you’re right. At least, contemplating tradeoffs doesn’t by itself guarantee that people would choose only one value. But it can force people to endorse conclusions that would seem absurd to them—preserving one apparent value at the expense of another. Once confronted, these tensions lead to the reduction to one value.
As for why people would meta-reflectively endorse this method of reflection—simply, because it makes sense.
So it seems to follow that if I offer someone the choice of murdering their child in exchange for greater pleasure, and they turn me down, we can confidently infer that they simply don’t believe I’ll follow through on the offer, because if they did, they would accept. Yes?
Believing that there is no such thing as greater pleasure than the loss from having your child murdered, is a subset of “not believing you’ll follow through on your offer”.
I don’t think you’re following that to the logical conclusion, though. You were implicitly arguing that most people’s refusal would not be based on “doesn’t believe I’ll follow through”. It is entirely plausible that most people would give the reason which I described, and as you have admitted, the reason which I described is a type of “doesn’t believe I’ll follow through”. Therefore, your argument fails, because contrary to what you claimed, most people’s refusal would (or at least plausibly could) be based on “doesn’t believe I’ll follow through”.
I agree that most people’s refusal would be based on some version of “doesn’t believe I’ll follow through.” I’m not clear on where I claimed otherwise, though… can you point me at that claim?
So it seems to follow that if I offer someone the choice of murdering their child in exchange for greater pleasure, and they turn me down, we can confidently infer that they simply don’t believe I’ll follow through on the offer, because if they did, they would accept. Yes?
It’s true that you didn’t explicitly claim people wouldn’t do that, but in context, you did implicitly claim it. You were responding to something you disagreed with, so it must mean that you thought they would not in fact do that, and you were presenting the claim that they would not do that to support your argument.
Someone recently suggested that there should be a list of 5 geek linguistic fallacies and I wonder if something like this should go in the list.
Your response seems very strange because either you meant to imply what you implied (in which case you thought you could misrepresent yourself as not implying anything), or you didn’t (in which case you said a complete non-sequitur that by pure coincidence sounded exactly like an argument you might have made for real)
My original question was directed to blacktrance, in an attempt to clarify my understanding of their position. They answered my question, clarifying the point I wanted to clarify; as far as I’m concerned it was an entirely successful exchange.
You’ve made a series of assertions about my question, and the argument you inferred from it, and various fallacies in that argument. You are of course welcome to do so, and I appreciate you answering my questions about your inferences, but none of that requires any particular response on my part as far as I can tell. You’ve shared your view of what I’m saying, and I’ve listened and learned from it. As far as I’m concerned that was an entirely successful exchange.
I infer that you find it unsatisfying, though. Well, OK. Can you state what it is you’re trying to achieve in this exchange, and how I can help you achieve it?
I infer that you find it unsatisfying, though. Well, OK. Can you state what it is you’re trying to achieve in this exchange, and how I can help you achieve it?
It appeared that you’re either willfully deceptive or incapable of communicating clearly, in such a way that it looks willfully deceptive. I was hoping you’d offer another alternative than those.
The other alternative I offer is that you’ve been mistaken about my goals from the beginning.
As I said a while back: I asked blacktrance a question about their working model, which got me the information I wanted about their model, which made it clear where our actual point of disagreement was (specifically, that blacktrance uses “values” to refer to what people like and not what they want). I echoed my understanding of that point, they agreed that I’d understood it correctly, at which point I thanked him and was done.
My goal was to more clearly understand blacktrance’s model and where it diverged from mine; it wasn’t to challenge it or argue a position. Meanwhile, you started from the false assumption that I was covertly making an argument, and that has informed our exchange since.
If you’re genuinely looking for another alternative, I recommend you back up and examine your reasons for believing that.
That said, I assume from your other comments that you don’t believe me and that you’ll see this response as more deception. More generally, I suspect I can’t give you what you want in a form you’ll find acceptable.
If I’m right, then perhaps we should leave it at that?
No, for a few reasons. First, they may not believe that what you’re offering is possible—they believe that the loss of a child would outweigh the pleasure that you’d give them. They think that you’d kill the child and give them something they’d enjoy otherwise, but doesn’t make up for losing a child. Though this may count as not believing that you’ll follow through on your offer. Second, people’s action-guiding preferences and enjoyment-governing preferences aren’t always in agreement. Most people don’t want to be wireheaded, and would reject it even if it were offered for free, but they’d still like it once subjected to it. Most people have an action-guiding preference of not letting their children die, regardless of what their enjoyment-governing preference is. Third, there’s a sort-of Newcomblike expected value decision at work, which is that deriving enjoyment from one’s children requires valuing them in such a way that you’d reject offers of greater pleasure—it’s similar to one-boxing.
This begs the question of whether the word “pleasure” names a real entity. How do you give someone “pleasure”? As opposed to providing them with specific things or experiences that they might enjoy? When they do enjoy something, saying that they enjoy it because of the “pleasure” it gives them is like saying that opium causes sleep by virtue of its dormitive principle.
That’s one way to do it, but not the only way, and it may not even be conclusive, because people’s wants and likes aren’t always in agreement. The test is to see whether they’d like it, not whether they’d want it.
And there’s a very real danger of this being a fully general counterargument against any sufficiently simple moral theory.
Establishing a lower bound on the complexity of a moral theory that has all the features we want seems like a reasonable thing to do. I don’t think the connotations of “fully general counterargument” are appropriate here. “Fully general” means you can apply it against a theory without really looking at the details of the theory. If you have to establish that the theory is sufficiently simple before applying the counterargument, you are referencing the details of the theory in a way that differentiates from other theories, and the counterargument is not “fully general”.
To be more precise: given two possible actions A and B, which lead to two different states of the world Wa and Wb, all attributes of Wa that aren’t attributes of Wb are consequences of A, and all attributes of Wb that aren’t attributes of Wa are consequences of B, and can motivate a choice between A and B.
Some attributes shared by Wa and Wb might be consequences of A or B, and others might not be, but I don’t see why it matters for purposes of choosing between A and B.
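A minimal rendering of that definition, assuming (purely for illustration) that a world-state can be represented as a set of attributes:

```python
# Consequences of A are the attributes of Wa that Wb lacks, and vice versa.
# The attribute names echo the earlier example: an extra year of life is a
# consequence even though the eventual "final" state is shared.

Wa = {"lives_year_1", "lives_year_2", "traces_eradicated"}
Wb = {"lives_year_1", "traces_eradicated"}

consequences_of_A = Wa - Wb  # {'lives_year_2'}
consequences_of_B = Wb - Wa  # set()

print(consequences_of_A, consequences_of_B)
```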
To be more precise: given two possible actions A and B, which lead to two different states of the world Wa and Wb, all attributes of Wa that aren’t attributes of Wb are consequences of A, and all attributes of Wb that aren’t attributes of Wa are consequences of B, and can motivate a choice between A and B.
Ok, now you’re hiding the problem in the word “attribute” and to a certain extent “state of the world”, e.g., judging by your reaction to my previous posts I assume “state of the world” includes the world’s history, not just its state at a given time. Does it also include counterfactual states, a la counterfactual mugging?
Well, I’d agree that there’s no special time such that only the state of the world at that time and at no other time matters. To talk about all times other than the moment the world ends as “the world’s history” seems a little odd, but not actively wrong, I suppose.
As for counterfactuals… beats me. I’m willing to say that a counterfactual is an attribute of a state of the world, and I’m willing to say that it isn’t, but in either case I can’t see how a counterfactual could be an attribute of one state of the world and not another. So I can’t see why it matters when it comes to motivating a choice between A and B.
Newcomb-like problems: I estimate my confidence (C1) that I can be the sort of person whom Omega predicts will one-box while in fact two-boxing, and my confidence (C2) that Omega predicting I will one-box gets me more money than Omega predicting I will two-box. If C1 is low and C2 is high (as in the classic formulation), I one-box.
Counterfactual-mugging-like problems: I estimate how much it will reduce Omega’s chances of giving $10K to anyone I care about if I reject the offer. If that’s low enough (as in the classic formulation), I keep my money.
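For concreteness, a rough sketch of that Newcomb calculation with made-up numbers; the payoffs are the classic $1,000,000/$1,000 formulation, and the confidence value is purely illustrative.

```python
# Rough sketch (illustrative numbers only) of the expected-value comparison described above.
# Classic payoffs assumed: the opaque box holds $1,000,000 iff Omega predicted one-boxing;
# the transparent box always holds $1,000. "c1" is my confidence that I can be the sort of
# person Omega predicts will one-box while in fact two-boxing.

BIG, SMALL = 1_000_000, 1_000

def expected_value(one_box, c1):
    if one_box:
        # Assume Omega reliably recognizes genuine one-boxers (the C2 condition).
        return BIG
    # Two-boxing: only with probability c1 did Omega still predict one-boxing.
    return c1 * (BIG + SMALL) + (1 - c1) * SMALL

c1 = 0.01  # low, as in the classic formulation
print(expected_value(True, c1))   # 1000000
print(expected_value(False, c1))  # 11000.0 -> one-boxing comes out ahead
```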
The fact that the fundamental laws of physics are time-reversible makes such variations on the 1984-ish theme of “we can change the past” empirically wrong.
Of course, the other possibility is that the consequentialist decides that the repugnant action isn’t so repugnant after all and commences torturing children.
For the consequentialist to actually start torturing children for this reason, he would have to know, to a high degree of certainty, that the utility function is maximized by torturing children. It may be that, given that he doesn’t have perfect knowledge, he is incapable of knowing that to the required degree. This would mean that he remains a consequentialist but could not be induced to torture children.
Edit: There’s also the possibility that his decision affects how other people make decisions, which is itself a sort of consequence that has to be weighed. If many of the people around him are deontologists, torturing children may have the side effect of making torturing children more acceptable to the deontologists around him, leading to those deontologists torturing children in cases that have bad consequences.
That you can pick hypothetical conditions where your deontological intuition is satisfied by your “utility function” tells us nothing about the situations where the intuition is in direct conflict with your “utility function”.
Let’s make this simple: if you were certain your utility function was maximized by torturing children, would you do it?
As a side note, the topic seems to be utilitarianism, not consequentialism. The terms are not interchangeable.
EDIT: Sorry, turns out you already answered my question. Here are some replacement questions.
You’ve said that you will do nothing, rather than violate a right in order to prevent other rights being violated. Yet you also say that people attempting to violate rights waive their rights not to be stopped. Is this rule designed for the purpose of allowing you to violate people’s rights in order to protect others? That seems unfair to people in situations where there’s no clearly identifiable moustache-twirling villain.
You have also said that people can waive any of their rights—for example, people waive their right not to have sex in order to have sex, and people waive their right not to be murdered in order to commit suicide. Doesn’t this deny the existence of rape within marriage? Isn’t it, in fact, the exact argument that was used to oppose laws prohibiting rape within marriage? This seems worrying. (Obviously, there are other, similar cases that can be constructed, but this one is a major problem.)
Finally, you mention that some actions which do not violate rights are nonetheless “being a dick”, and you will act to prevent and punish these acts in order to discourage them. Doesn’t this imply that there are additional aspects to morality not contained by “rights”? Do you act as a Standard-LessWrong-Consequentialist-Utilitarian™ with regards to Not Being A Dick?
You’ve said that you will do nothing, rather than violate a right in order to prevent other rights being violated. Yet you also say that people attempting to violate rights waive their rights not to be stopped. Is this rule designed for the purpose of allowing you to violate people’s rights in order to protect others? That seems unfair to people in situations where there’s no clearly identifiable moustache-twirling villain.
I wish everyone in this thread would be more careful about using the word “right”. If you are trying to violate somebody’s rights, you don’t have “a right not to be stopped”. You have your perfectly normal complement of rights, and some of them are getting in the way of protecting someone else’s rights, so, since you’re the active party, your (contextually relevant) rights are suspended. They remain in effect out of that context (if you are coming at me with a knife I may violently prevent you from being a threat to me; I may not then take your wallet and run off cackling; I may not, ten years later, visit you in prison and inform you that your mother is dead when she is not; etc.).
You have also said that people can waive any of their rights—for example, people waive their right not to have sex in order to have sex, and people waive their right not to be murdered in order to commit suicide. Doesn’t this deny the existence of rape within marriage?
That’s a good question, but the answer is no. A marriage does not constitute a promise to be permanently sexually available. You could opt to issue standing permission, and I gather this was customary and expected in historical marriages, but you can revoke it at any time; your rights are yours and you may assert them at will. I don’t object to people granting each other standing permission to do things and sticking with it if that’s how they prefer to conduct themselves, but morally speaking the option to refuse remains open.
Finally, you mention that some actions which do not violate rights are nonetheless “being a dick”, and you will act to prevent and punish these acts in order to discourage them. Doesn’t this imply that there are additional aspects to morality not contained by “rights”?
No. There’s morality, and then there’s all the many things that are not morality. Consequentialists (theoretically, anyway) assign value to everything and add it all up according to the same arithmetic—with whatever epicycles they need not to rob banks and kidnap medical test subjects—but that’s not what I’m doing. Morality limits behavior in certain basic ways. You can be a huge dick and technically not do anything morally wrong. (And people can get back at you all kinds of ways, and not technically do anything morally wrong! It’s not a fun way to live and I don’t really recommend it.)
Do you act as a Standard-LessWrong-Consequentialist-Utilitarian™ with regards to Not Being A Dick?
No. Actually, you could probably call me sort of virtuist with respect to dickishness. I am sort of Standard-LessWrong-Consequentialist-Utilitarian™ with respect to prudence, which is a whole ’nother thing.
I wish everyone in this thread would be more careful about using the word “right”. If you are trying to violate somebody’s rights, you don’t have “a right not to be stopped”.
Well, sure. I did read your explanation(s). I was assuming the worst-case scenario for the hypothetical, where you have to violate someone’s rights in order to protect others. For example, the classic lying-to-the-Nazis-about-the-Jews scenario.
That’s a good question, but the answer is no. A marriage does not constitute a promise to be permanently sexually available.
Not anymore, no. Because we changed the rules. Because of all the rapes.
You could opt to issue standing permission, and I gather this was customary and expected in historical marriages, but you can revoke it at any time; your rights are yours and you may assert them at will.
I … see, that seems consistent. I assume you can waive the right to abolish an agreement at will, too? That’s the foundation of contract law, but I don’t want to assume.
There’s morality, and then there’s all the many things that are not morality … You can be a huge dick and technically not do anything morally wrong. (And people can get back at you all kinds of ways, and not technically do anything morally wrong! It’s not a fun way to live and I don’t really recommend it.)
Indeed, and that’s what I’m asking about. What is this “don’t be a dick” function, and what place does it hold?
No. Actually, you could probably call me sort of virtuist with respect to dickishness. I am sort of Standard-LessWrong-Consequentialist-Utilitarian™ with respect to prudence, which is a whole ’nother thing.
Huh. Shame.
… what’s “prudence” in your nomenclature? I haven’t seen the term as you use it.
If you are trying to violate someone’s rights, then your contextually relevant rights are forfeited. For example, the Nazi has forfeited the right not to be lied to.
I assume you can waive the right to abolish an agreement at will, too? That’s the foundation of contract law, but I don’t want to assume.
I’m not sure I understand.
What is this “don’t be a dick” function, and what place does it hold?
Well, some people are motivated to avoid being dicks, and might value information about how to do it. It’s not very ontologically special.
… what’s “prudence” in your nomenclature? I haven’t seen the term as you use it.
To me, it looks like consequentialists do prudence exclusively, and name it “morality”, instead of actually doing any morality. Prudence is arranging for things to be how you would like them to be.
If you are trying to violate someone’s rights, then your contextually relevant rights are forfeited. For example, the Nazi has forfeited the right not to be lied to.
Yes, I know. Hence my question:
You’ve said that you will do nothing, rather than violate a right in order to prevent other rights being violated. Yet you also say that people attempting to violate rights waive their rights not to be stopped. Is this rule designed for the purpose of allowing you to violate people’s rights in order to protect others? That seems unfair to people in situations where there’s no clearly identifiable moustache-twirling villain.
You see?
I’m not sure I understand.
Can I waive my right to un-waive a right? For example, if I waive my right not to work for someone, can I also waive my right to change my mind? As in, a standard contract?
I would assume so, but hey, it can’t hurt to ask.
Well, some people are motivated to avoid being dicks, and might value information about how to do it. It’s not very ontologically special.
Are you, personally, motivated to discourage dickish behaviour? You are, right? You mentioned you would sue someone for being a dick to you, if what they did were illegal, even if it was perfectly moral for them to do so.
To me, it looks like consequentialists do prudence exclusively, and name it “morality”, instead of actually doing any morality. Prudence is arranging for things to be how you would like them to be.
… as long as “how you would like them to be” doesn’t violate any rights. I think I see what you mean by that.
… ah, and “how you would like things to be” includes “no dickishness”, right?
I do not see; can you start that line of inquiry over completely for me?
Can I waive my right to un-waive a right?
I haven’t actually thought about this before, but my instinct is no. Although if you arrange for it to be really hard to communicate a change of mind, someone who went by their last communication with you might not be doing anything wrong, just making a sincere mistake.
Are you, personally, motivated to discourage dickish behaviour?
I try to minimize it from and around myself. I am not on a Global Dickishness Reduction Campaign of any kind. Maybe I should have said not that I’m not a consequentialist about it, but rather that I’m neither agent- nor recipient-neutral about it? How I would like things to be certainly refers to dickishness.
In cases where rights conflict, and there’s no alternative that doesn’t violate at least one, I privilege the null action. (I considered denying ought-implies-can, instead, but decided that committed me to the existence of moral luck and wasn’t okay.) “The null action” is the one where you don’t do anything.
You also say that attackers lose their contextually relevant rights, so you can violate their rights in order to defend others.
My original question was, doesn’t that feel like a patch to allow you to act like a consequentialist when it’s clear you have to?
Isn’t that unfair to people in situations where there is no attacker for you to focus on, where relieving their suffering is not a matter of taking out a convenient target?
Also, I just realized this, but doesn’t that mean you should be really concerned about any laws punishing things that don’t violate rights, since those criminals haven’t waived their rights under your system? For example, suing someone for violating your “right to privacy” by publicising a photo taken of you in a public place.
I haven’t actually thought about this before, but my instinct is no.
Huh. This universal right to change your mind about commitments seems like the most radical part of your philosophy (although obviously, you’re more tentative about endorsing it) - I noticed you endorsed the right of private individuals to secede from the state.
I am not on a Global Dickishness Reduction Campaign of any kind.
Yeah, you mentioned being a sort of virtue ethicist here … you would vote for (prudent) anti-dickishness laws, that sort of thing?
My original question was, doesn’t that feel like a patch to allow you to act like a consequentialist when it’s clear you have to?
No. It doesn’t feel consequentialist to me at all. It’s a patch, but it’s not a consequentialist-flavored patch.
Isn’t that unfair to people in situations where there is no attacker for you to focus on, where relieving their suffering is not a matter of taking out a convenient target?
Example?
Also, I just realized this, but doesn’t that mean you should be really concerned about any laws punishing things that don’t violate rights, since those criminals haven’t waived their rights under your system? For example, suing someone for violating your “right to privacy” by publicising a photo taken of you in a public place.
I am concerned about that, see elsewhere re: guaranteed exit, consent of the governed, etc. etc. blah blah blah.
you would vote for (prudent) anti-dickishness laws, that sort of thing?
If you specify that something’s prudent I’m pretty likely to vote for it even if it doesn’t affect dickishness in particular. Yay, prudence!
No. It doesn’t feel consequentialist to me at all. It’s a patch, but it’s not a consequentialist-flavored patch.
Yeah, I didn’t understand your position as well when I asked that.
Isn’t that unfair to people in situations where there is no attacker for you to focus on, where relieving their suffering is not a matter of taking out a convenient target?
Example?
How about tragedies of the commons, for example?
Also, I just realized this, but doesn’t that mean you should be really concerned about any laws punishing things that don’t violate rights, since those criminals haven’t waived their rights under your system? For example, suing someone for violating your “right to privacy” by publicising a photo taken of you in a public place.
I am concerned about that, see elsewhere re: guaranteed exit, consent of the governed, etc. etc. blah blah blah.
Most countries do not allow criminals to escape punishment by revoking their consent to be punished.
If you specify that something’s prudent I’m pretty likely to vote for it even if it doesn’t affect dickishness in particular. Yay, prudence!
Hmm, good point. Effective anti-dickishness laws, then. (Not that I expect you to change your answer.)
Tragedies of the commons are a coordination problem. My system can cover them if there’s some kind of complex ownership or promise-keeping involved, but doesn’t handle implicit commonses.
Most countries do not allow criminals to escape punishment by revoking their consent to be punished.
Yeah, I’m aware.
I would vote for effective anti-dickishness laws all else being equal but might prioritize reduced dickishness below other things if there were tradeoffs involved.
Tragedies of the commons are a coordination problem. My system can cover them if there’s some kind of complex ownership or promise-keeping involved, but doesn’t handle implicit commonses.
Well, fair enough, as long as you’re aware of it.
Most countries do not allow criminals to escape punishment by revoking their consent to be punished.
Yeah, I’m aware.
I would vote for effective anti-dickishness laws all else being equal but might prioritize reduced dickishness below other things if there were tradeoffs involved.
So isn’t there an inconsistency here? Any law punishing something merely dickish, rather than rights violations, is itself a rights violation … right? Or can large-scale prudence outweigh a few minor rights violations?
ETA: That seems reasonable, honestly; even if it does taint the purity of your deontological rules, it only becomes important when determining large-scale policies, which is when you’d want a more precise accounting to shut up and multiply with.
Any law punishing something merely dickish, rather than rights violations, is itself a rights violation … right?
Blah blah guarantee of exit yada yada consent of the governed blah blah I’m really sick now of re-explaining that I am aware of the tensions in my view when it comes to governance and I’m not going to do it anymore.
Alicorn, you just acknowledged that most people being punished are not asked whether they consent to it.
Indeed, attempting to use one’s “guarantee of exit” in these situations is often itself a crime, and one carrying punishments you classify as “rights violations” if I understand you correctly.
That’s sort of why I commented on the potential issues this introduces?
People in the real world do not have guarantee of exit. I’m aware of that. I’ve been over this topic elsewhere in thread more times than I wish to have been.
So … I’m sorry, are you saying you’re actually against these laws, but were rather saying that you would be in favour of them in an ideal world? I appear to have misunderstood you somewhat, so perhaps these repetitions and rephrasings are not in vain.
Thank you for your patience, I know how frustrating it is dealing with inferential gaps and the illusion of clarity better than most :)
It has run out. There is no more patience. It has joined the choir invisible. I said so a couple comments upstream and got downvoted for it; please don’t ask me again.
Well, fair enough. While I’m disappointed not to be able to further improve my understanding of your beliefs, I treasure the LessWrong custom of tapping out of conversations that are no longer productive.
Have a nice day, and may you continue to improve your own understanding of such matters :)
(I think you were actually downvoted for missing the point of my response, by the way. I certainly hope that’s the reason. It would be a great shame if people started downvoting for “tapping out” statements.)
In fact, it’s precisely the opposite. The central feature of rapes we care about is the fact that they are extremely unpleasant, to put it politely. “Consent”, when formalized so that it no longer captures the information we care about, is noncentral.
Or at least, I think it is. In fact, I believe this should also be clear to Alicorn, on reflection (and indeed she has an explanation for why her system doesn’t fall into this trap).
The central feature of rapes we care about is the fact that they are extremely unpleasant, to put it politely.
No, the central issue is in fact consent (there are also other issues related to sex and marriage, but that discussion involves more inferential distance than I’m willing to bridge right now). One way to see this is that it is still considered rape if the victim was unconscious and thus not capable of experiencing anything, pleasant or otherwise. Also, if someone consented to sex at the time but later decides she didn’t enjoy it, I assume you wouldn’t allow her to retroactively declare it rape.
“No. There’s morality, and then there’s all the many things that are not morality.”
Is this only a linguistic argument about what to call morality? With, e.g., virtue ethics claiming that all areas of life are part of morality, since ethics is about human excellence, and your claim that ethics only has to do with obligations and rights? Is there a reason you prefer to limit the domain of morality? Is there a concept you think gets lost when all of life is included in ethics (in virtue ethics or utilitarianism)?
Also, could you clarify the idea of obligations: are there any obligations which don’t emanate from the rights of another person? Are there any obligations which emerge inherently from a person’s humanity and are therefore not waivable?
Is this only a linguistic argument about what to call morality?
You could re-name everything, but if you renamed my deontological rules “fleeb”, I would go on considering fleeb to be ontologically distinct in important ways from things that are not fleeb. I’m pretty sure it’s not just linguistic.
Is there a reason you prefer to limit the domain of morality?
Because there’s already a perfectly good vocabulary for the ontologically distinct non-fleeb things that people are motivated to act towards—“prudence”, “axiology”.
Is there a concept you think gets lost when all of life is included in ethics (in virtue ethics or utilitarianism)?
Unassailable priority. People start looking at very large numbers and nodding to themselves and deciding that these very large numbers mean that if they take a thought experiment as a given they have to commit atrocities.
Also, could you clarify the idea of obligations: are there any obligations which don’t emanate from the rights of another person?
Yes; I have a secondary rule which for lack of better terminology I call “the principle of needless destruction”. It states that you shouldn’t go around wrecking stuff for no reason or insufficient reason, with the exact thresholds as yet undefined.
Are there any obligations which emerge inherently from a person’s humanity and are therefore not waivable?
“Humanity” is the wrong word; I apply my ethics across the board to all persons regardless of species. I’m not sure I understand the question even if I substitute “personhood”.
Let’s take truth-telling as an example. What is the difference between saying that there is an obligation to tell the truth, that honesty is a virtue, or that telling the truth is a terminal value which we must maximize in a consequentialist-type equation? Won’t the different frameworks be mutually supportive, since obligation will create a terminal value, virtue ethics will show how to incorporate that into your personality, and consequentialism will say that we must be prudent in attaining it? Similarly, prudence is a virtue which we must be consequentialist to attain and which is useful in living up to our deontological obligations, and justice is a virtue which emanates from the obligation not to steal and not to harm other people, and therefore we must consider the consequences of our actions so that we don’t end up in a situation where we will act unjustly.
I think I am misunderstanding something in your position, since it seems to me that you don’t disagree with consequentialism in that we need to calculate, but rather in what the terminal values are (with utilitarianism saying utility is the only terminal value and you saying that there are numerous, such as not lying, not stealing, not being destructive, etc.).
By obligations which emerge from a person’s personhood and which are not waivable, I mean that they emerge from the self and not in relation to another’s rights, and therefore cannot be waived. To take an example (which I know you do not consider an obligation, but which will serve to illustrate the class, since many people have this belief): a person has an obligation to live out their life as a result of their personhood and therefore is not allowed to commit suicide, since that would be unjust to the self (or nature, or god, or whatever).
What is the difference between saying that there is an obligation to tell the truth, that honesty is a virtue, or that telling the truth is a terminal value which we must maximize in a consequentialist-type equation?
The first thing says you must not lie. The second thing says you must not lie because it signifies or causes defects in your character. The third thing says you must not lie unless there is a compensatory amount of something else encouraging you to lie. The systems really don’t fuse this prettily unless you badly misunderstand at least two of them, I’m afraid. (They can cooperate at different levels and human agents can switch around between implementing each of them, but on a theoretical level I don’t think this works.)
I think I am misunderstanding something in your position, since it seems to me that you don’t disagree with consequentialism in that we need to calculate, but rather in what the terminal values are (with utilitarianism saying utility is the only terminal value and you saying that there are numerous, such as not lying, not stealing, not being destructive, etc.).
I still don’t know what you mean by “emerge from the self”, but if I understand the class of thing you’re pointing out with the suicide example, I don’t think I have any of those.
Yes, I read that post. (Thank you for putting in all this time clarifying your view.)
I don’t think you understood my question, since “The third thing says you must not lie unless there is a compensatory amount of something else encouraging you to lie” is not viewing ‘not lying’ as a terminal value but rather as an instrumental value. A terminal value would mean that lying is bad not because of what it will lead to (as you explain in that post). But if that is the case, must I act in a situation so as not to be forced to lie? For example, let’s say you made a promise to someone not to get fired in your first week at work, and the boss will fire you if he knows that you cheered for a certain team. Would you say that you shouldn’t watch that game, since you would otherwise be forced to either lie to the boss or break your promise of keeping your job? (Please fix any loopholes you notice, since this is only meant for illustration.)
If so, it seems like the consequentialist utilitarian is saying that there is a deontological obligation to maximize utility, and therefore you must act to maximize that, whereas you are arguing that there are other deontological values, but you would agree that you should be prudent in achieving your deontological obligations.
(we can put virtue ethics to the side if you want, but won’t your deontological commitments dictate which virtues you must have, for example honesty, or even courage, so as to act in line with your deontological obligations)
That’s a very long paragraph, I’m going to do my best but some things may have been lost in the wall of text.
I understand the difference between terminal and instrumental values, but your conclusion doesn’t follow from this distinction. You can have multiple terminal values. If you terminally value both not-lying and also (to take a silly example) chocolate cake, you will lie to get a large amount of chocolate cake (where the value of “large” is defined somewhere in your utility function). Even if your only terminal value is not-lying, you might find yourself in an odd corner case where you can lie once and thereby avoid lying many times elsewhere. Or if you also value other people not lying, you could lie once to prevent many other people from lying.
a deontological obligation to maximize utility
AAAAAAAAAAAH
you should be prudent in achieving your deontological obligations
It is prudent to be prudent in achieving your deontological obligations. Putting “should” in that sentence flirts with equivocation.
won’t your deontological commitments dictate which virtues you must have, for example honesty, or even courage, so as to act in line with your deontological obligations
I think it’s possible to act completely morally acceptably according to my system while having whopping defects of character that would make any virtue ethicist blush. It might be unlikely, but it’s not impossible.
To make sure I understand you correctly: are these correct conclusions from what you have said?
a. It is permitted (i.e. ethical) to lie to yourself (though probably not prudent)
b. It is permitted (i.e. ethical) to act in a way which will force you to tell a lie tomorrow
c. It is forbidden (i.e. unethical) to lie now to avoid lying tomorrow (no matter how many times or how significant the lie in the future)
d. The differences between the systems will only express themselves in unusual corner cases, but the underlying conceptual structure is very different
I still don’t understand your view of utilitarian consequentialism: if ‘maximizing utility’ isn’t a deontological obligation emanating from personhood or the like, where does it come from?
A, B, and C all look correct as stated, presuming situations really did meet the weird criteria for B and C. I think differences between consequentialism and deontology come up sometimes in regular situations, but less often when humans are running them, since human architecture will drag us all towards a fuzzy intuitionist middle.
I don’t think I understand the last paragraph. Can you rephrase?
Why don’t you view the consequentialist imperative to always seek maximum utility as a deontological rule? If it isn’t deontological where does it come from?
The imperative to maximize utility is utilitarian, not necessarily consequentialist. I know I keep harping on this point, but it’s an important distinction.
Edit: And even more specifically, it’s total utilitarian.
Keep up the good work. Any idea where this conflation might have come from? It’s widespread enough that there might be some commonly misunderstood article in the archives.
I don’t know if it’s anything specific… classic utilitarianism is the most common form of consequentialism espoused on LessWrong, I think, so it could be as simple as “the most commonly encountered member of a category is assumed to represent the whole category”.
It could also be because utilitarianism was the first (?) form of consequentialism to be put forth by philosophers. Certainly it predates some of the more esoteric forms of consequentialism. I’m pretty sure it’s also got more famous philosophers defending it, by rather a large margin, than any other form of consequentialism.
To me, it looks like consequentialists care exclusively about prudence, which I also care about, and not at all about morality, which I also care about. It looks to me like the thing consequentialists call morality just is prudence and comes from the same places prudence comes from—wanting things, appreciating the nature of cause and effect, etc.
Could you elaborate on what this thing you call “morality” is?
To me, it seems like the “morality” that deontology aspires to be, or to represent / capture, doesn’t actually exist, and thus deontology fails on its own criterion. Consequentialism also fails in this sense, of course, but consequentialism does not actually attempt to work as the sort of “morality” you seem to be referring to.
Doesn’t this deny the existence of rape within marriage? Isn’t it, in fact, the exact argument that was used to oppose laws prohibiting rape within marriage?
Yes, and do you have a reason why this is in fact not a valid conclusion? Or is this an appeal to what the law happens to say today?
I, personally, find this situation morally repugnant. Psychological unity of mankind leads me to hope Alicorn does too. What more justification could you ask?
However, even though signing a contract does not seem to remove the harm of rape, I of course cannot rule out the possibility that I am picturing the situation incorrectly, or that the benefits would outweigh the harm. (Yes, Alicorn has stated that they care about harms outside their framework of rights.)
Alicorn, on the other hand, likely already holds the standard opinion on rape (it is bad), and thus would find a certain inconsistency in endorsing a position that was OK with it. So in that sense, yes, the law today is valid evidence that this might be an issue, if one looks at the causal chain that led up to it changing.
I, personally, find this situation morally repugnant. Psychological unity of mankind leads me to hope Alicorn does too. What more justification could you ask?
Well, the fact that these laws were only passed very recently suggests that it is you who is out of step with the psychological unity.
So in that sense, yes, the law today is valid evidence that this might be an issue, if one looks at the causal chain that led up to it changing.
The context appears to be a moral panic about rape that, among other things, argues for dispensing with due process for accused rapists, and holds that if two drunk people have sex and regret it later, this means the man raped the woman. So no, the law today is not in fact valid evidence.
Well, the fact that these laws were only passed very recently suggests that it is you who is out of step with the psychological unity.
I was relying on the framing; obviously I wouldn’t expect people to respond the same way in literally any context. (You’re right, I didn’t make that clear.)
The context appears to be a moral panic about rape that, among other things, argues for dispensing with due process for accused rapists, and holds that if two drunk people have sex and regret it later, this means the man raped the woman. So no, the law today is not in fact valid evidence.
Hmm. It is true that rapists are demonized, and this is sometimes extended past edge cases—but obviously, you are yourself relying on the fact that this is obvious nonsense to most people for your rhetorical point.
This seems more akin to similar effects that spring up around other major crimes—not that that makes it rational, of course, or implies that the laws genuinely have greater expected utility than their inverse.
I have no idea how to interpret this even semi-charitably. To me this translates to “I was relying on dark arts”.
It is true that rapists are demonized, and this is sometimes extended past edge cases—but obviously, you are yourself relying on the fact that this is obvious nonsense to most people for your rhetorical point.
I was relying/hoping that you weren’t sufficiently caught up in the panic to no longer recognize this as obvious nonsense. My point is that relying on laws that have only been passed in the previous decade, especially given that there’s a moral panic involved, is highly dubious.
This seems more akin to similar effects that spring up around other major crimes
Yes, similar effects have sprung up around other major crimes in the past. However, I believe that rapists are the moral panic du jour.
I have no idea how to interpret this even semi-charitably.
It’s less likely that someone will ignore facts that have been recently brought to their attention. You’re right, I wasn’t quite sure what word to use there, I may have screwed up.
you are yourself relying on the fact that this is obvious nonsense to most people for your rhetorical point.
I was relying/hoping that you weren’t sufficiently caught up in the panic to no longer recognize this as obvious nonsense.
With respect, have you ever said that to someone and had them respond “Well yeah, sure! Of course women can retroactively make anyone they’ve ever had sex with a rapist.”
I’m sure you’ve seen people endorse equivalent conclusions via deceptive wording, but the general response I would predict from all but a very small core would be “we don’t believe that!”
This seems more akin to similar effects that spring up around other major crimes
Yes, similar effects have sprung up around other major crimes in the past. However, I believe that rapists are the moral panic du jour.
Hmm, perhaps. I would have gone for pedophiles myself (a term already spreading to include all creepily large age gaps, by the way), but this isn’t really a contest.
Psychological what of what? You mean “current societal norms based on a multitude of shifting cultural and biological determinants, subject to change”?
The belief is not universal. The ability to empathise with rape victims is (well, actually, we define our terms so as to exclude psychopaths and the like). Also, yes, I share certain cultural assumptions and conventions, so I have reason to think Alicorn may respond the same way.
My model predicted Alicorn would react to this specific question about that belief, after living her sort of life, with a reluctance or outright refusal to bite the bullet and endorse rape. Not that every human ever would unilaterally endorse my particular belief about rape.
[Kawoomba—you have no way of knowing this—strenuously objects to my model of human nature, since it predicts human CEV coheres rather well, whereas they believe that ethics are (and should be?) entirely relative and largely determined by one’s circumstances. They like to jump on anything I say that even vaguely implies human morality is somehow universal. There are some quite comprehensive discussions of this point scattered throughout LessWrong.]
My model predicted Alicorn would react to this specific question about that belief, after living her sort of life, with a reluctance or outright refusal to bite the bullet and endorse rape.
Whether the activity in question constitutes “rape” is precisely the question under discussion.
Actually, I don’t doubt that there are many characteristics that apply to a vast majority of humans, thanks e.g. to mirror neurons et al. As such, basing predictions on such common factors is quite valid. For example, predicting that someone who doesn’t eat for extended periods will experience a state called “being hungry”, and will show certain behavioral characteristics associated with that state.
I just dislike the term because it’s typically used not as a descriptor (“many humans show a propensity for X”; in the example, “if you don’t eat for a period of time, you will probably act in accordance with ‘being hungry’”) but (invalidly) as prescriptive (“if you don’t eat for a period of time, you should act in accordance with ‘being hungry’”). Getting an “ought” from an “is”, and all that.
I can predict with reasonable confidence that all participants in the current discussion are currently wearing clothes. Based on the “garmental unity of mankind”. Doesn’t mean they should.
If PUoM means there are shared desires, then the only way you could fail to get the “ought” from the “is” is by denying that “oughts” have anything to do with desires, surely.
For one, if there are supposedly universally shared desires among a group of agents, and yet you find one agent among them who doesn’t share those, you’ve found a contradiction—those universally shared desires weren’t universally shared, after all.
For two, describing that a majority (or even a supermajority, or a super-duper-overwhelming majority) shares certain desires would be a description, not some majority-tyrannical edict for the outliers to conform to. For example: Stating that a majority of humans share heterosexual desires doesn’t mean that those who don’t somehow should, as well (just to go with the obvious applause-light example which is applicable in this case, it’s easy to come up with arbitrarily many other examples).
Facts about desires, conjoined with an intuitively appealing maxim such as “everyone ought to maximize the satisfaction of everyone’s desires”, can imply oughts.
IANAlicorn, but, since I have the same belief, I’ll give it a shot. My imperfect introspection tells me that, since the world where people don’t have rights would quickly become unfair and full of suffering (and this has been repeatedly experimentally tested), I want to live in a world where I, my family or someone I can identify with would have less of a chance of being treated unfairly and made to suffer needlessly. Pretending that people have “unalienable rights” goes a long way toward that goal, so I want to believe it and I want everyone else to believe it, too. To dig deeper, I am forced to examine the sources for my desire for fairness and the origins of my empathy (imperfect though it is), and the available literature points to the mix of genetics and upbringing.
I want to live in a world where I, my family or someone I can identify with would have less of a chance of being treated unfairly and made to suffer needlessly. Pretending that people have “unalienable rights” goes a long way toward that goal, so I want to believe it and I want everyone else to believe it, too.
That sounds like rule utilitarianism, or a rule utilitarianism-like consequentialism, not like a deontological justification for human rights.
I suppose you are right. However, if you skip the introspection part, “people have rights” makes sense in most cases without having to worry about utilities. It’s the edge cases, like the trolley problem, which require deeper analysis.
A decent justification, but not very deontological. What I was curious about is how Alicorn determines what rights exist purely deontologically, without reference to consequences.
Since I’m no longer maybe going to write a thesis on it, mostly I don’t work on this a lot. Not lying, not stealing, and not attacking people does pretty good for everyday. There’s sort of an informal checklist when I think something might be a right—the rights have to be reasonably consistent with each other, they’re overwhelmingly negative rights and not positive ones, simpler ones are better, etc. This would be easier with a sample maybe-a-right but I haven’t examined any of those recently.
This would be easier with a sample maybe-a-right but I haven’t examined any of those recently.
If I may offer one —
Suppose that I am photographed on the street outside a place that has a bad reputation (with some people). The photographer might publish the photo, which could lead viewers to believe bad things of me.
One acquaintance of mine, M, claims that I have a right to forbid the photographer from publishing this photo; I have the right to control publicity about me or the use of my image, even though the picture was taken in public.
Another acquaintance, B, claims that the photographer has a freedom-of-speech right to publish it, so long as they do not explicitly say anything false about me. B believes that it would be nice of the photographer to ask my permission, but that I do not have a right to this niceness.
Still another acquaintance, R, says that it depends on who I am: if I am the mayor of Toronto, I have no right to control photos of me, since my actions are of public interest; but if I am a privately employed engineer of no public reputation, then I do have that right.
Okay, I’ll walk through my process of apprehending and making a call on this situation. It looks like a good example, thanks for coming up with it.
The conflict here is between you, and the photographer—other persons in the story have opinions but aren’t directly involved. The steps are that there was an opportunity to ask you if it was okay to photograph you (which the photographer passed over), the decision to photograph you, the opportunity to ask you if it’s okay to publish it (which the photographer also passes over), and the decision currently at hand of whether to publish the photo. If there’s a rights violation potential or actual, it’s probably in one of those places. The statement of the problem doesn’t suggest that you’ve committed any rights violations by happening to be in this location.
The fact that two chances to ask for consent have been passed up is suspicious. It’s not a guarantee that something has gone wrong—the photographer is allowed to look at you without securing permission, for instance—but it’s a red flag. In the normal course of things, people waive some of their rights when asked explicitly or by other mechanisms all the time. People waive their right to refuse sex, for instance, on a per-occasion basis in order to have non-rape sex.
You don’t actually do anything in this story, except exist in a place that by stipulation you are quite entitled to be, so only the photographer might be committing a wrong here.
So the obvious candidate possibilities are that you have a right not to be photographed, or that you have the right not to have your likeness publicized, or that you have no such rights and the photographer may do as they like with the photograph provided no other wrongs (such as libel, a form of lying) are committed in so doing.
But earlier I said the photographer is allowed to look at you without permission. You’re in a public place, where anyone, including photographers as an unspecial class of anyone, may walk by. The upshot of a photograph is that others, too, may look at you. Any of them could have walked by in real time and seen you without wronging you. There’s no obvious mechanism by which a gap in time should change things. If one of the passersby had a photographic memory, that wouldn’t change the fact that they could look at you. Similarly, the fact that people who live far away from this location might not have had a chance to show up in person and espy your presence, or the information that anything going on might be of interest, doesn’t seem like it has anything to do with anything.
So it seems to me that the photographer is probably, at worst, being a dick. You do not have a right to prohibit someone from photographing you and publishing a photo of you unless something else is going on. (I feel like I should mention now that I hate this result. I don’t photograph well and definitely don’t like the idea of my likeness being used in any which way without my agreement. In fact, if the law allowed, I might even pursue someone who did this to me via legal routes, which—I hope obviously—are separate from ethical condemnation. In any event I’m not rigging this to suit myself.)
But supposing it were the other way around, your acquaintance R might still have a point. Assuming it’s known that elected politicians are treated differently with respect to the use of their photographs, pursuing a career as a politician might constitute waiving one’s right not to be photographed (if we had that right, which I concluded above we probably don’t). In this counterfactual, this would apply to the mayor of Toronto, but not people who became public figures without doing anything wrong (that would be a potential rights forfeiture, particularly if they are a threat to the photograph-viewing population) or choosing to become public figures on purpose.
Ok, this is quite a detailed response and I appreciate the thought that went into writing it. However, from my perspective, it raises more questions than it answers. For example, you say things like this:
…the photographer is allowed to look at you without permission … If one of the passersby had a photographic memory, that wouldn’t change the fact that they could look at you … You do not have a right to prohibit someone from photographing you and publishing a photo of you unless something else is going on...
But why not? I’m not asking this just to be a contrarian. For example, many people believe that all of us have a fundamental right to privacy; this would imply that you do, in fact, have the right to “prohibit someone from photographing you and publishing a photo of you”. Presumably you disagree, but on what basis?
Furthermore, you say that you
…might even pursue someone who did this to me via legal routes, which—I hope obviously—are separate from ethical condemnation.
I don’t see how that works. If you believe that a person has a right to photograph you and publish the photo without your permission, and yet you are launching a legal challenge against him for doing so, then are you not engaging in an immoral attack? Sure, it’s not a physical attack, but it’s an attack nonetheless, and previously you stated (assuming I understood you correctly) that attacks on people who have violated no rights are immoral.
Can you tell me where I lost you in the description of why I do disagree?
If you believe that a person has a right to photograph you and publish the photo without your permission
No. This isn’t how the right is framed. They don’t have a right to do it; no one must behave in a way to protect their ability to do so; I don’t have to stand still so they can get a clear shot, I don’t have to go out in public, if I happen to own a news outlet I don’t have to give them a platform, if I acquire a device that makes me look like a smudge of static to nearby cameras I can carry it around without this being morally problematic. (Perhaps unless I’m under some sort of agreement to be visible to security cameras, or something like that.) I just don’t have the right to not be photographed. Remember that rights are overwhelmingly negative. The fact that someone commits no wrong by doing X does not imply that others commit a wrong by making X inconvenient or impossible.
(You’re also being kind of overbroad in understanding my use of the word “attack”, which was intended broadly, but not so broadly as to include “seeking legal recompense in response to an upsetting, by stipulation illegal, behavior which does not happen to constitute a violation of any intrinsic moral rights”.)
Can you tell me where I lost you in the description of why I do disagree?
You state that “you do not have a right to prohibit someone from photographing you”, but I don’t understand where this rule comes from. You expand on it in your explanation that follows, but again, I don’t fully understand your reasoning. You say:
Remember that rights are overwhelmingly negative. The fact that someone commits no wrong by doing X does not imply that others commit a wrong by making X inconvenient or impossible.
That makes sense to me, in that your rule is consistent with the rest of your system. I may even agree that it’s a good idea from a consequentialist point of view. However, I still do not understand where the rule comes from. Is photographing you qualitatively different from murdering you (which would presumably be immoral), and if so, why? Come to think of it, why are all rights negative?
You’re also being kind of overbroad in understanding my use of the word “attack”, which was intended broadly, but not so broadly as to include “seeking legal recompense in response to an upsetting, by stipulation illegal, behavior which does not happen to constitute a violation of any intrinsic moral rights”.
I may have misunderstood your goals. In launching a legal challenge, what do you hope to accomplish in terms of “recompense”? Are you seeking to extract a fine from the photographer, or perhaps to restrict his freedom in some way, or both?
Let’s say that you seek a fine, and you succeed. How is that different, morally speaking, from breaking into his house and stealing the money? In one case you use a lockpick; in the other one you use a lawyer; but in the end you still deprive the photographer of some of his money. Why does one action count as an attack, and the other does not?
Now that I think of it, perhaps you would consider neither action to be an attack? Once again, I’m not entirely sure I understand your position.
Come to think of it, why are all rights negative?
Are all rights negative? What about, say, the right to life, or the right to not-starving-to-death?
Many people seem pretty jazzed about the idea of a “right to marriage” or a “right to insert-euphemism-for-abortion-here”, based largely on the fact that our (as in, their and my) tribe considers the policies these imply applause lights. I have no idea what Alicorn thinks of this, though.
I’m fine with that kind of right existing in the legal sense and encourage all the ones you listed. I don’t think anyone has a fundamental moral obligation to feed you or perform abortions for you or conduct a marriage ceremony for you, though you can often get them to agree to it anyway, empirically, with the use of money.
If I may, I’m curious on what basis you consider those rights a good idea? Is it just a whim? Are you worried real rights might be violated?
I’m not usually in favour of calling all these various things “rights”, since it rather confuses things—as you’re probably aware—but I must admit the “right to not-starving-to-death” sounds important.
Are you saying you would be OK with letting people starve to death? Or am I misunderstanding?
I think those rights-in-the-legal-sense are good for political and social reasons. Basically, I think they’re prudent.
I don’t think I am doing something wrong, right now, by having liquid savings while there exist charities that feed the hungry, some of whom are sufficiently food-insecure for that to make a difference to their survival. I bite the bullet if the starving person happens to be nearby: this doesn’t affect their rights, and only rights have a claim on my moral obligations. I might choose to feed a starving person. I will support political policy that seems like it will get more starving people fed. I will tend to find myself emotionally distraught on contemplating that people are starving, so I’d resent the “OK with it” description. Also, when I have children, they will have positive claims on me and I will be morally obligated to see to it that they are fed. Other than that? I don’t think we have to feed each other. It’s supererogatory.
I think those rights-in-the-legal-sense are good for political and social reasons.
I see. And what are those reasons?
I don’t think I am doing something wrong, right now, by having liquid savings while there exist charities
Yeah, that’s a common problem with consequentialists. Obviously, we have various instincts about this and it’s both hard and dangerous to ignore them.
I bite the bullet if the starving person happens to be nearby: this doesn’t affect their rights, and only rights have a claim on my moral obligations.
I’m actually somewhat pleased to hear that, because it’s not the first time a deontologist has told me that. I was too speechless to respond, and they changed their mind before I could find out more.
I might choose to feed a starving person. I will support political policy that seems like it will get more starving people fed. I will tend to find myself emotionally distraught on contemplating that people are starving, so I’d resent the “OK with it” description.
Ah, here we go. That’s good!
You do realize that sort of thing is usually part of what’s referred to by “morality”? So leaving it out seems … incomplete.
Postscript:
I’m not sure, but it may be that there’s something causing some confusion, by the way. I’ve seen it happen before in similar discussions.
There seem to be two functions people use “morality” for—judging people, and judging actions.
Consequentialists, or at least standard-lesswrong-utilitarian-consequentialists, resolve this by not judging people, except in order to predict them—and even then, “good” or “bad” tend to be counterproductive labels for people, epistemically.
Instead, they focus entirely on judging their actions—asking, which of my options is the correct one?
But I gather you (like a lot of people) do judge people—if someone violates your moral code, that makes them an acceptable casualty in defending the rights of innocents. (Although you’re not a total virtue ethicist, who only judges people, not actions; the polar opposite of LW standard.)
I’m not really going anywhere with this, it just seems possibly relevant to the discussion at hand.
I think those rights-in-the-legal-sense are good for political and social reasons.
I see. And what are those reasons?
I’m gonna decline to discuss politics on this platform. If you really want to talk politics with me we can do it somewhere else, I guess.
I would tend to describe myself as judging actions and not people directly, though I think you can produce an assessment of a person that is based on their moral behavior and not go too far wrong given how humans work.
Oh! Oh, OK. Sorry, I just assumed those were so vague as to avoid mindkilling. Of course we shouldn’t derail this into a political debate.
I would tend to describe myself as judging actions and not people directly
In fairness, you’re clearly smart enough to disregard all the obvious mistakes where we make prisons as awful as possible (or, more likely, just resist making them better) because we Hate Criminals.
This is a more abstract idea, and much more limited. I’m not criticising it (here). Just noting that I’ve seen it cause confusion in the past, if left unnoticed.
Actually, what the heck, while I’m here I may as well criticize slightly.
This is little more than my first thought on reviewing what I think I understand as your moral system.
It’s discovered that Hitler’s brain was saved all those years ago. Given a giant robot body by mad scientists, he is rapidly re-elected and repurposes the police force into stormtroopers for rounding up Jews.
You, being reasonably ethical, have some Jews hidden in your house. Since the stormtroopers have violated relevant rights (actually, they could be new on the job, but you can just tell they’re thinking about it—good enough), their relevant right not to be lied to is waived, and you tell them “no Jews hidden here!” quite cheerfully before shutting the door.
However, naturally, they know Bayes’ theorem and they’ve read some of your writings, so they know you’re allowed to lie and your words aren’t evidence either way—although silence would be. So they devise a Cunning Plan.
They go around next door and talk to Mrs. Biddy, a sweet old lady who is, sadly, unaware of the recent political shift. She was always told to respect police officers and she’s not going to stop now at her age. The white-haired old grandmother comes around to repeat the question the nice men asked her to.
She’s at the door, with the two stormtroopers standing behind her grinning. What do you do?
I mean, obviously, you betray the Jews and they get dragged off to a concentration camp somewhere. Can’t lie to the innocent old lady, can’t stay silent because that’s strong Bayesian evidence too.
But you’re reasonably intelligent; you must have considered an isomorphic case when constructing this system, right? Casualties in a Just War or something? Two people manipulated into attacking each other while dressed as bears (“so the bear won’t see you coming”)? Do you bite this bullet? Have I gone insane from lack of sleep? I’m going to bed.
Why are all these hypothetical people so well-versed in one oddball deontologist’s opinions? If they’re that well-read they probably know I’m half Jewish and drag me off without asking me anything.
Mrs. Biddy sounds culpably ignorant to me, anyway.
You may or may not have gone insane from lack of sleep.
In fairness, you’re clearly smart enough to disregard all the obvious mistakes where we make prisons as awful as possible (or, more likely, just resist making them better) because we Hate Criminals.
Um, the purpose of prisons is to punish criminals, so yes, prisons should be awful, not necessarily “as awful as possible”, but for sufficiently serious crimes quite possibly.
EDIT: Wait, you mean “punish” in the consequentialist sense of punishing defection, right?
Yes, but this does not imply that the current level of awfulness is optimal. It certainly does not mean we should increase the awfulness beyond the optimal level.
But if someone proposes that the current level is too high (whether on consequentialist or legal grounds), one of the arguments they will encounter is “you want to help rapists and murderers?! Why? Those bastards deserve it.”
(The consequentialist argument for, say, the current state of US prisons is of course undercut by the existence of other countries with much less awful prisons.)
The consequentialist argument for, say, the current state of US prisons is of course undercut by the existence of other countries with much less awful prisons.
If you want to look at optimal awfulness, there is a much better way to test: look at the crime rate. The current crime rate is extremely high by historic standards. Furthermore, the recent drop from its peak in the 1970s has been accomplished by basically turning major cities into Orwellian surveillance states. I think increasing the awfulness of prisons would be a better solution; at the very least it puts the burden on the criminals rather than the innocent.
That really isn’t a good argument for the current state of US prisons, is it? Clearly, even openly allowing institutional rape has failed to help; yet other, less harsh countries have not seen soaring crime rates by comparison.
I’ve seen studies suggesting that certainty of punishment is much more important for determining behavior than the extremity of it—it’s more a question of a strong justice system, a respect for authority (or fear, one might say), than people performing expected utility calculation in their heads.
It is known that lots of people enjoy inflicting pain on the helpless. Anyone who punishes prisoners because they enjoy doing so is in a conflict of interest, at least if he has any discretion in how to carry out the punishment.
I don’t know if it’s more so because comparing degrees here is hard, but I would say that we should not hire prison guards who enjoy punishing prisoners and have discretion in doing so.
You can rephrase “punishing criminals” in terms of quasi-consequentialist decision theory as deterrence/counterfactual crime prevention. All the other reasons I’ve heard are little more than rationalizations by people who want to punish/deter criminals but feel icky about the word “punishment”.
What possible reasons there could plausibly be for jailing people, and what actually in fact motivates most people to support jailing people, are not the same thing.
Some possibilities for the former include:
1. Retribution (i.e., punishing criminals because they deserve it)
2. Closure/satisfaction for the victim(s), or for family/friends of the victim(s).
3. Deterrence, i.e. protecting society from counterfactual future crimes we expect other people to otherwise perpetrate.
4. Protecting society from counterfactual future crimes we expect this same criminal to otherwise perpetrate.
5. Rehabilitation.
… (other things I am not thinking of at the moment)
None of those things are the same as any of the others. Some fit the rather imprecise term “punishment” closely (1, 2), others not so closely (3, 4), still others not at all (5).
I would argue that (1) and (2) are in fact the same thing just formulated at different meta-levels, and that (3) and (4) are the quasi-consequentialist decision theory “translations” of (1) and (2). Rehabilitation (5) is what I called a fake reason, as can be seen by the fact that the people promoting it are remarkably uninterested in whether their rehabilitation methods actually work.
(3) and (4) are the quasi-consequentialist decision theory “translations” of (1) and (2)
I’m not entirely sure what you mean by this. Are you suggesting that people who advocate (3) and (4) as actual justifications for having prisons do not have those things as their true, internal motivations, but are only claiming them for persuasion purposes, and actually (1) and/or (2) are their real reasons? Or are you saying something else?
Rehabilitation (5) is what I called a fake reason as can be seen by the fact that the people promoting it are remarkably uninterested in whether their rehabilitation methods actually work.
That may well be, but that doesn’t make it not an actual good reason to have prisons.
Your comment which prompted me to start this subthread spoke about what should be the case. If you say “this-and-such are the actual motivations people have for advocating/supporting the existence of prisons”, fine and well. But when you talk about what should happen or what should exist, then people’s actual internal motivations for advocating what should happen/exist don’t enter into it.
I’m not entirely sure what you mean by this. Are you suggesting that people who advocate (3) and (4) as actual justifications for having prisons do not have those things as their true, internal motivations, but are only claiming them for persuasion purposes, and actually (1) and/or (2) are their real reasons? Or are you saying something else?
Something else; see my reply to hen, where I go into more detail about this.
But when you talk about what should happen or what should exist, then people’s actual internal motivations for advocating what should happen/exist don’t enter into it.
See hen’s comment for the problem I have with rehabilitation.
With respect, both hen’s comment and your reply read to me like nonsense. I can neither make sense of what either of you are saying, nor, to the degree that I can, see any reason why you would claim the things you seem to be claiming. Of course, I could merely be misunderstanding your points.
However, I think we have now gone on a tangent far removed from anything resembling the original topic, and so I will refrain from continuing this subthread. (I’ll read any responses you make, though.)
I think Eugine_Nier might be trying to say that the reason we evolved the emotions of anger and thirst for vengeance is that being known to be vengeful (even irrationally so) is itself a good deterrent. And possibly that this therefore makes these the same thing. But I’m not sure about that because that seems to me like a straightforward case of mixing up adaptation executors and fitness maximizers.
To see what I mean by the dignity of moral agents, think of a criminal as a moral agent, rather than a defective object to be fixed. The idea of rehabilitation should then acquire a certain Orwellian/totalitarian aura; i.e., this is the kind of thing the Ministry of Love does.
As for my statement about deterrence and retribution, I believe we’re having that discussion here.
A datapoint: I think the purpose of prisons is the institutional expression of anger, and insofar as they do this, they are an expression of respect for the criminal as a moral agent. In fact, I think that the use of prisons as a deterrent or to modify behavior is downright evil: you’re not allowed to put people in a box and not let them out just to change the way they act, and especially not to communicate something to other people.
(For the record, it looks like you may not be a consequentialist, but it seems worth asking.)
I think that the use of prisons as a deterrent or to modify behavior is downright evil: you’re not allowed to put people in a box and not let them out just to change the way they act, and especially not to communicate something to other people.
Um … why not? I mean, when we all agree it’s a good idea, there are reasonable safeguards in place, we’ve checked it really does reduce rapes, murders, thefts, brutal beatings … why not?
I think the purpose of prisons is the institutional expression of anger, and insofar as they do this, they are an expression of respect for the criminal as a moral agent.
Is it OK to lock someone in a box because you’re angry? Isn’t that, in fact, evil? Does it become OK if you “respect” them? (I’m not sure what this refers to, I admit.)
I more-or-less agree with your worldview, with the caveat that I would interpret counterfactual crime prevention as anger translated into decision theory language (it helps to think about the reason we evolved the emotion of anger). Deterrence as applied to other people is a version of counterfactual crime prevention where we restrict our thinking to other people in this event branch, as opposed to all event branches.
I bite the bullet if the starving person happens to be nearby: this doesn’t affect their rights, and only rights have a claim on my moral obligations.
I’m actually somewhat pleased to hear that, because it’s not the first time a deontologist has told me that.
To a VNM consequentialist, in every situation there is a unique “best action”; by contrast, for a deontologist or virtue ethicist, their morality doesn’t specify a single action to take. Thus you are allowed (and possibly encouraged) to help the starving man, but aren’t required to.
Well, since Alicorn’s system does not take account of that, this is in any case biting a bullet for you as well.
[With that acknowledged, I am curious about those intuitions of yours. Is this about punishing defection? The standard “well, if they’re a Bad Person, they deserve what’s coming to them”? Or more “it’s their own fault, they made their bed let them lie in it, why should we be responsible for their foolishness”, that sort of thing?]
Well, since Alicorn’s system does not take account of that
As you may have noticed, I’m not Alicorn.
Is this about punishing defection? The standard “well, if they’re a Bad Person, they deserve what’s coming to them”? Or more “it’s their own fault, they made their bed let them lie in it, why should we be responsible for their foolishness”, that sort of thing?
Both; also, I have some more examples that could fall under one or both, depending on how one defines “defection” and “foolishness”. If someone decided that they’d rather not work and rely on my charity to get food, they won’t be getting my charity. Also, if CronoDAS comes by my house begging for food, the answer is no.
Another example is that my response to the famous train dilemma depends on what the people were doing on the track. If they were, say, picking up pennies, I’m letting them get run over.
Well, since Alicorn’s system does not take account of that
As you may have noticed, I’m not Alicorn.
Well … yeah. Because you’re replying to something I said to Alicorn.
Both; also, I have some more examples that could fall under one or both, depending on how one defines “defection” and “foolishness”. If someone decided that they’d rather not work and rely on my charity to get food, they won’t be getting my charity. Also, if CronoDAS comes by my house begging for food, the answer is no.
Is this for game-theoretic reasons, or more of a virtue-ethics “lazy people don’t deserve food” thing?
Another example is that my response to the famous train dilemma depends on what the people were doing on the track. If they were, say, picking up pennies, I’m letting them get run over.
Are we killing people for stupidity now? I mean, I guess if the numbers were equal, the group drawn from the general population is a better bet to save than the group selected for “plays on train tracks”—but I don’t think that’s what you meant.
Wait, is this a signalling thing? Y’know, sophisticated despair at the foolish masses? If it is, there’s no need to reply to this part; I’ll drop it.
Another example is that my response to the famous train dilemma depends on what the people were doing on the track. If they were, say, picking up pennies, I’m letting them get run over.
Are we killing people for stupidity now?
Did you click on my link? “Picking up pennies on railroad tracks/in front of a steam roller” is a well-known metaphor for taking certain types of risks in economic circles.
However, to answer your question: no, I (normally) won’t kill someone for their stupidity, but I see no reason to save them, and certainly no reason to kill other people to save them.
why don’t you care about the suffering and death of someone “stupid”
Why should I?
Would you prefer that others care about your suffering and death, if something happened such that you became (temporarily or permanently) “stupid”?
If they chose to take that kind of risk, they are responsible for its consequences.
In many cases, people are not aware of the risks they are taking; in many other cases, people may not have less-risky alternatives. Should they still be entirely responsible for the consequences? Because that seems to lean towards “just-world hypothesis” thinking, and if that’s where this is going, we may want to just go there and be done with it.
Would you like to be the innocent bystander sacrificed to save an idiot from the consequences of his own stupidity?
Me in particular, or people in general? Because there is a particular class of idiot that most people would GLADLY be sacrificed to save; they’re called “children”.
As for me, personally, that depends on the calculus. Am I saving one idiot, or ten? Are they merely idiotic in this circumstance, or idiotic in general (i.e., in most situations a normal human being might reasonably find themselves in)? Are we talking about a well-medicated version of me with a good chance of contributing meaningfully to society, or a cynical, hopeless, clinically depressed version of me that would gladly take ANY reason to die? Because I think at this point, we’re talking about weighted values, and I quite imagine that there’s a certain number of certain kinds of idiots that I would absolutely consider more worth saving than certain versions of myself, if I was doing the calculus honestly.
And if I’m not doing the calculus honestly, then I’m an idiot.
People do not choose to be children. People do choose to be careless or to refuse to learn. Idiocy that is caused by carelessness or refusal to learn is therefore the person’s fault.
In the unlikely case of someone who has, for instance, been infected by nanobots that force his brain to act carelessly, I would of course not hold him to blame.
In the unlikely case of someone who has, for instance, been infected by nanobots that force his brain to act carelessly, I would of course not hold him to blame.
As opposed to, say, just a reduced capacity for impulse control or learning? Or an ingrained aversion to thinking before acting?
I don’t give children a free pass. If an adult is sufficiently incompetent, I wouldn’t blame him, either.
However, I would not classify an adult as sufficiently incompetent for these purposes unless his impulse control is so bad that he can’t safely live on his own. (There is no inconsistency between this and considering children incompetent, since children cannot safely live on their own either.)
In the example given, I think if people are incompetent enough to risk physical injury or death for the sake of picking up pennies, that’s pretty good evidence that they can’t safely live on their own without supervision.
If they managed to survive long enough to get to the railroad track to pick up the pennies, they’re probably able to live on their own without supervision unless there was an extreme stroke of luck involved (such as having been released from custody fifteen minutes ago).
My feelings about governments are complicated by the guarantee-of-exit thing I mentioned elsethread, but with that understood, I’m not opposed on any level to systematic taxation. If a government were rounding people up to work in agriculture or soup kitchens or what have you against their will, that would be wrong.
No; there doesn’t have to be a society that wants you, or for that matter one that is agreeable to your preferences.
is taxation a violation of people’s rights? If not, why not?
I don’t think so. I think failing to provide guarantee-of-exit is a failing on the part of various governments and it does make some things they do less defensible, but I’m not opposed to taxes. Part of it is actually that it’s not a person collecting taxes.
I’m pretty sure the overwhelming majority of taxes are not collected in the tax-collector-based way depicted in the Disney version of “Robin Hood”. I do object when force comes to be involved. (I don’t have any suggestions on what to do about it. Something being wrong doesn’t, actually, stop anyone from doing it.)
They’re not collected in the tax-collector-based way because there’s no need to—there’s enough of a threat of force to get people to comply. If it’s a credible threat, the government would use force on non-compliers, presumably thus violating their rights. As you said, something being wrong doesn’t stop anyone from doing it—but it does license you to say that they shouldn’t do it, and it licenses the victims to resist.
Okay. Elsewhere in thread when I was walking through the photography example, I said that if there were a right to not be photographed but it were generally known that the customs were different for public figures, becoming a public figure on purpose might constitute consent. This is why I think guaranteed exit is so important—if it were in place, you could move to whatever country had the taxation setup you could best tolerate if they’d have you, and that would be that.
Even without guaranteed exit, countries can have a price of admission, though. (Sort of like even if there is no universal healthcare, your doctor can charge, and even if there is no food bank, so can the grocery store.)
I really doubt that anyone is waiting for me to license them to tax dodge or pick fights with cops.
This assumes that staying implies consent, which is a questionable assumption. It presupposes that the State has the right to do whatever it wants on its territory as long as it lets people leave (even if the only other state in the world is even more authoritarian). For example, if half of the world were ruled by North Korea and the other half by China, would you say that China’s policies were morally justified because people would be free to leave and move to North Korea?
I really doubt that anyone is waiting for me to license them to tax dodge or pick fights with cops.
No, but they may like morality to license them to avoid taxes or resist cops. (Although I do like the image of someone thinking, “Damn, taxes suck, if only that person who wrote that Twilight fanfic said I don’t have to pay them.”)
This assumes that staying implies consent, which is a questionable assumption.
No kidding it’s questionable, hence my thing about guaranteed exit. But likewise the various agents of the government do not necessarily consent to freeloading. If the Red Cross puts out juice and cookies for blood donors, and you are not a donor, and you take some, you are stealing even if there is nowhere else for you to get food.
It presupposes that the State has the right to do whatever it wants on its territory as long as it lets people leave (even if the only other state in the world is even more authoritarian).
No, it does not imply that. They can’t do things suddenly, in particular (because then that particular aspect of the interaction hasn’t been consented to). Consent is also revocable at any time even if standing permission is granted. They also have to stick to contextual relevance in attempting to enforce laws. (Also, a government that was operating under Alicorn Morality couldn’t lie, which I think all by itself would shake some things up.)
For example, if half of the world were ruled by North Korea and the other half by China, would you say that China’s policies were morally justified because people would be free to leave and move to North Korea?
I am unqualified to have an opinion on the details of the political situations in most countries. If I just read this as “Bad Country With Guarantee of Exit” and “Worse Country With Guarantee of Exit”, well, that sounds like a pretty lousy situation to be in, but nothing about this situation means the countries involved have to “charge less” or require different standards of behavior from their citizens.
Imagine that the world is divided between Fascistia and Communistan. One day, the Duce of Fascistia announces that in a year, all the wealth of the residents of Fascistia will be confiscated to build statues of Mussolini, but before then, they’re perfectly free to take their stuff and move to Communistan. The General Secretary of the Communist Party of Communistan announces that he’ll happily accept all new immigrants, but warns that in a year, all the wealth of residents of Communistan will be confiscated to build statues of Lenin. In this case, the change is not sudden (if you consider this sudden, change “in a year” to “in ten years”) and it doesn’t prevent either country’s residents from leaving. Is this a rights violation?
Or consider another scenario. One day you’re checking your mail and find a letter from famed thief Arsene Lupin, informing you that in a year he will be breaking into your house to steal a recent painting you’ve acquired. M. Lupin happens to read LessWrong from time to time, so he’s read your writings on morality. He writes that you are free to leave your house and take your possessions with you, thwarting him. Nevertheless, you don’t leave. In doing so, have you consented to the painting being stolen?
Assuming the residents of Fascistia and Communistan have no wherewithal to create separate states (including by declaring subregions independent and declining to accept further services from the parent countries, thereby ending the transactional relationship; forming seasteads; flying to the Moon; etc.) it sure looks like they are in a pickle, unless they manage to use this year to become sculptor suppliers, or attempt to convince the leaders in question to change their minds. This is sort of like my version of the utility monster—sure, in real life, there are large numbers of diverse people and institutions you could choose to interact with, but what if your choices were Bad and Also Bad?! - and I have to bite the bullet here. (I do think it’s probably hard to construct a situation where nobody is, for whatever reason, capable of declaring independence, but if you cut off that route...)
I don’t consent to interact with M. Lupin or allow him into my house on any level. We are not in a transactional relationship of some kind that would imply this.
This is sort of like my version of the utility monster—sure, in real life, there are large numbers of diverse people and institutions you could choose to interact with, but what if your choices were Bad and Also Bad?! - and I have to bite the bullet here.
This seems a strange place to bite the bullet. Why can the state seize property (with ample warning) but M. Lupin can’t? The state is made of people, and if no person is permitted to seize it, then the state isn’t either. Alternatively, if the state is permitted to seize it, then some person must be as well, so it seems that people would then be allowed to make demands that entitle them to your stuff.
I don’t consent to interact with M. Lupin or allow him into my house on any level. We are not in a transactional relationship of some kind that would imply this.
Why is this different from the state? Is it because it provides services? Would this be any different if M. Lupin broke into your house every day to do your laundry, without your consent, and then claimed that he had a right to the painting as payment for his services?
The services thing is key, but so is consent (of some kind, with guaranteed exit, etc etc caveat caveat). I don’t consent to M. Lupin coming into my house even to do my laundry; you can’t throw a book through somebody’s open window and demand ten dollars for it; if I make a batch of cookies I cannot charge my neighbors for the smell. If the people of Provinceland declare independence from Communistan, and Communistan officials commence visiting Provinceland without permission, continuing to maintain the roads even if Provincelanders wish they would go away, then Communistan is conducting a (bizarre) invasion, not a consensual transaction.
How many people does it take to secede? Would it be permissible for California to secede from the US? What about the Bay Area—would it be morally permissible for it to be its own country? What about a small suburb? One house? Can I unilaterally secede, then claim that tax collectors/cops are invading my country of Blacktransylvania?
I don’t have a minimum number in mind, although you’d certainly need a fair number for this to be advisable. I will bemusedly support your solo efforts at secession if that is meaningful to you, provided that the land you’re trying to secede with belongs to you or someone amenable to the project.
Thank you for explaining your position. It’s surprisingly radical, if your last sentence is to be taken literally. I have one last question. Assume a few of my neighbors and I secede, and say that tax collectors are unwelcome. May we then amend our permission to say that tax collectors are welcome, but only if they’re collecting up to X amount of taxes, where X is the amount needed to fund [list of US government services we support], in return for receiving those services?
Assume a few of my neighbors and I secede, and say that tax collectors are unwelcome. May we then amend our permission to say that tax collectors are welcome, but only if they’re collecting up to X amount of taxes, where X is the amount needed to fund [list of US government services we support], in return for receiving those services?
I don’t see why not, but the United States is not obliged to offer the services a la carte.
What do you mean “comes from”? The rule in question fails to exist; it doesn’t have to come from anywhere, it just has to not be. Do you think that it does be?
Is photographing you qualitatively different from murdering you (which would presumably be immoral), and if so, why ?
Someone photographing you has a different intention from someone murdering you. (If the photographer believed that taking a picture of you would, say, steal your soul, then I would hold them responsible for this bad behavior even though they are factually mistaken.)
Come to think of it, why are all rights negative ?
I don’t think literally all rights are negative. Positive rights are generally acquired when someone makes you a promise, or brings you into existence on purpose. (I think children have a lot of claim on their parents barring unusual circumstances.) But nothing has happened to create a positive obligation between you and a random photographer.
In launching a legal challenge, what do you hope to accomplish in terms of “recompense” ? Are you seeking to extract a fine from the photographer, or perhaps to restrict his freedom in some way, or both ?
I don’t actually know any jurisdiction’s laws about publishing nonconsensual photographs. What I’d be looking for would probably depend on what I could reasonably expect to succeed at getting. This entire endeavor has left the moral sphere as long as I don’t violate the photographer or anyone else’s rights. My goal would probably be to discourage non-consensual photography of me and in general, as it’s kind of a dick move, and to compensate myself (perhaps with money, since it’s nicely fungible) for the unpleasantness of having been nonconsensually photographed. If I do not, in so doing, violate any rights, I can seek whatever is available, no problem.
Let’s say that you seek a fine, and you succeed. How is that different, morally speaking, from breaking into his house and stealing the money ? In one case you use a lockpick; in the other one you use a lawyer; but in the end you still deprive the photographer of some of his money. Why does one action count as an attack, and the other does not ?
This entire thing is actually complicated by the fact that I think political entities should guarantee an opportunity of exit—that if you can’t live with your society’s set of rules you should be shown out to any other place that will have you. Without that, there’s definitely some tension on my moral system where it interacts with the law. If we had proper guarantee of exit, the photographer being around me at all constitutes agreement to live by applicable shared rules, which in this hypothetical include not nonconsensually photographing each other (I don’t know if that’s a real rule, but supposing it is) and also not breaking into each other’s houses and also producing fines when legally obliged to do so. In the absence of guarantee of exit it’s complicated and annoying. This also gets really stupid around intellectual property laws, actually, if you really want to put the squeeze on my system. I’m just gonna say here that any system will stop working as nicely when people aren’t cooperating with it.
I don’t think I’d characterize burglary as “attack”, but I already listed “stealing” separately in that shortlist of things.
What do you mean “comes from”? The rule in question fails to exist; it doesn’t have to come from anywhere, it just has to not be. Do you think that it does be?
I… am not sure what that paragraph means at all. In more detail, my question is twofold:
What are deontological rules in general, and rights in particular ? Are they, for example, laws of nature such as gravity or electromagnetism; are they heuristics (and if so, heuristics for what); or are they something else ?
How do we know which deontological rules we should follow in general; and which rights people have specifically ? For example, you mentioned earlier that people do not have a right to not be photographed. How do you know this ?
Positive rights are generally acquired when someone makes you a promise...
Once again, how do you know this ?
I don’t think I’d characterize burglary as “attack”, but I already listed “stealing” separately in that shortlist of things.
Fair enough; I was using “attack” in the general sense, meaning “an action whose purpose is to diminish an actor’s well-being in some way”.
That said, I’m not sure I understand your model of how the legal system interacts with morality. At one point, you said that the legal system is ethically neutral; I interpreted this to mean that you see the legal system as a tool, similar to a knife or a lockpick. Thus, when you said that you’d wield the legal system as a weapon against the photographer (or, more specifically, his money), I questioned the difference between doing that and wielding a lockpick to accomplish the same end. But now I’m beginning to suspect that my assumption was wrong, and that you see the legal system differently from a tool—is that right ?
I was using “attack” in the general sense, meaning “an action whose purpose is to diminish an actor’s well-being in some way”.
Depending on how you define “purpose”, burglary still might not qualify. The purpose of a burglary isn’t to harm its victims, it’s to acquire their valuables; harm is a side effect.
Good point; in this case, the fact that the victims lose said valuables is merely a side effect of how physical reality works.
Perhaps a better definition would be something like, “an action at least one of whose unavoidable and easily predictable effects includes the diminishing of another actor’s well-being”.
Rights are a characteristic of personhood. Personhood emerges out of general intelligence and maybe other factors that I don’t fully understand. Rights are that which it is wrong to violate; they are neither laws of physics nor heuristic approximations of anything. They are their own thing. I do think they are necessary-given-personhood.
Can you tell me where I lost you in the detailed description of what my process to determine that people don’t have that right was? I wrote down the whole thing as best I could.
Positive rights are generally acquired when someone makes you a promise...
Once again, how do you know this ?
Promises are the sort of thing that generates positive rights because that’s what “promise” means. If it doesn’t do that, it’s something other than a promise. (At least formally. You could have other definitions for the same word. The particular sense in which I use “promise” is this thing, though.)
At one point, you said that the legal system is ethically neutral
I think if I were you I’d be really careful with my paraphrasing. I’m not going to object to this one in particular, but it brought me up short.
you see the legal system differently from a tool—is that right ?
The legal system is many things; it definitely works as a tool for problems like collective action, coordination problems, deterrence of disrupting the social order, and more. I’m not sure what you’re reading into the word “tool” so I’m not sure whether I want to claim to see it exclusively as a tool or not.
I want to ask “why”, because I don’t fully understand this answer, but I fear that I must ask the more difficult question first: what do you mean by “personhood” ? I know it can be a tricky question, but I don’t think I’ll be able to figure out your position, otherwise. However, this next line gave me pause, as well:
Rights are that which it is wrong to violate; they are neither laws of physics nor heuristic approximations of anything.
Since I am not a deontologist (as far as I know, at least) I read this as saying: “rights are sets of rules that describe actions which any person (pending Alicorn’s definition of personhood) must avoid at all costs”. Is that what “wrong to violate” means ?
Can you tell me where I lost you in the detailed description of what my process to determine that people don’t have that right was?
I’m having trouble with the “process” part. From my perspective, whenever I ask you, “how do you know whether a person has the right X”, you either list a bunch of additional rights that would be violated if people didn’t have the right X; or derive right X from other rights, whose origin I don’t fully understand, either. Clearly I’m missing something, but I’m not sure what it is.
I do acknowledge that your system of rights makes a sort of sense; but the only way I know of interpreting this system is to look at it and ask, “will these rules, if implemented, result in a world that is better, or at least as good as, the world we live in now ?” That is, from my perspective, the rules are instrumental but not terminal values. As far as I understand, deontologists treat rights as terminal values—is that correct ?
I think if I were you I’d be really careful with my paraphrasing.
I did not want to make it sound like I’m putting words in your mouth. Whenever I say something like, “you, Alicorn, believe X”; I only mean something like, “to the best of my understanding, which may be incorrect or incomplete, Alicorn believes X, please correct me if this is not so”.
I’m not sure what you’re reading into the word “tool”
By “tool”, I mean something like, “a non-sapient entity which a sapient agent may use in order to more easily accomplish a limited set of tasks”. For example, a hammer is a tool for driving nails into wood (or other materials). The “grep” command is a tool for searching text files. The civil legal system could be seen as a tool for extracting damages from parties who wronged you in some way.
I believe (and you might disagree) that most tools (arguably, all tools, though weapons are a borderline case) are morally neutral. A hammer is neither good nor evil; it’s just a hammer. I can use it to build shelter for a homeless man, thus performing a good act; or I could use it to smash that man’s skull, thus performing an evil act; but it is the act (and possibly the person performing it) who is good or evil, not the hammer.
I don’t have a really thorough account of personhood. It includes but is not limited to paradigmatic adult humans.
Since I am not a deontologist (as far as I know, at least) I read this as saying: “rights are sets of rules that describe actions which any person (pending Alicorn’s definition of personhood) must avoid at all costs”. Is that what “wrong to violate” means ?
I definitely wouldn’t have chosen that phrasing, but it doesn’t seem obviously wrong?
I’m having trouble with the “process” part. From my perspective, whenever I ask you, “how do you know whether a person has the right X”, you either list a bunch of additional rights that would be violated if people didn’t have the right X; or derive right X from other rights, whose origin I don’t fully understand, either. Clearly I’m missing something, but I’m not sure what it is.
I’m not sure where you want me to ground this. Where do you ground your morality?
As far as I understand, deontologists treat rights as terminal values—is that correct ?
I wouldn’t choose the word “value”, but they definitely are non-instrumental in nature.
I will tentatively classify the legal system as a tool in this sense, albeit a tool for doing some very abstract things like “solve coordination problems”.
I don’t have a really thorough account of personhood. It includes but is not limited to paradigmatic adult humans.
So how do you know that rights “naturally fall out of” personhood, if you don’t really know what personhood even is ?
“rights are sets of rules that describe actions which any person must avoid at all costs”
I definitely wouldn’t have chosen that phrasing, but it doesn’t seem obviously wrong?
Ok, so in this case my problem is with the prescriptive nature of rights. What does “must avoid” mean in this case ? I personally can think of only three (well, maybe 2.5) reasons why an action must be executed or avoided:
1. The action will lead to some highly undesirable consequences. For example, jumping off of very high places must be avoided at all costs, because doing so will result in your death.
2. The preference or aversion to the action is hardwired into the person (via genetics, in case of humans). For example, most humans—even newborn ones—will instinctively attempt to stay away from ledges.
3. The action is part of the laws of nature that act upon all physical objects. For example, humans on Earth can’t help but fall down, should they find themselves in mid-air with no support. The same is true of rocks.
I’m not sure, but I don’t think any of these points adequately describe deontological rules. Point #1 is conditional: if your death becomes highly desirable, you may find jumping off a cliff to be a reasonable action to take. Points #2 and #3 are more descriptive than prescriptive. Regarding #2, yes we are wired to avoid ledges, but we are also wired to desire fatty foods, and in the modern world some of us must fight that compulsion every day or face highly undesirable consequences. Point #3, of course, is entirely descriptive; yes, objects fall down, but what you do with that knowledge is up to you.
Note also that there is a clear strategy for learning about reasons #1, 2, and 3: we look at the evidence and attempt to adjust our belief based on it. Again, I don’t understand how we can learn about deontological rules at all.
I’m not sure where you want me to ground this. Where do you ground your morality?
I have some sort of a utility function which is hardwired into my personality. Lacking perfect introspection, I can’t determine what it is exactly, but based on available evidence I’m reasonably sure that it includes things like “seek pleasure, avoid pain” and “increase the pleasure and reduce the pain of other people in your tribe”. Based on this, I can evaluate the fitness of each action and act (or choose not to act) to maximize fitness.
Obviously, in practice, I don’t apply this reasoning explicitly to every action; just like you don’t apply the full Bayesian reasoning machinery to every rustling noise that you hear from the bushes. It would take too long, and by the time you figure out P(tiger | rustling), you’d be tiger-food. Still, that’s merely an optimization strategy, which is reducible to the underlying reasoning.
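(To make that shorthand concrete, here is my own illustrative sketch with made-up numbers, not anything the above commits to. The explicit calculation being skipped is just Bayes’ theorem:

P(tiger | rustling) = P(rustling | tiger) × P(tiger) / P(rustling)

With, say, P(rustling | tiger) = 0.9, P(tiger) = 0.01, and P(rustling) = 0.1, that gives P(tiger | rustling) = 0.09. Nobody consciously grinds through this for every rustle; the fast heuristic stands in for the underlying computation.)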
So how do you know that rights “naturally fall out of” personhood, if you don’t really know what personhood even is ?
I’m starting to get concerned that you have some intractable requirements for completeness of a philosophical theory before one can say anything about it at all. Do you think your ethics would withstand a concerted hammering like this? Do you know how to compare utility between agents? What are your feelings on population ethics? How do you deal with logical uncertainty and Pascal’s muggings in complex Omega-related thought experiments? I’m not planning to make you solve these peripheral problems before allowing you to say that you endorse actions that have the best consequences over their alternatives (or whatever framing you prefer).
What does “must avoid” mean in this case ?
It means “if you don’t avoid it, you will be doing something wrong”. That’s all. Your guesses are wrong. Did you read Deontology for Consequentialists?
Do you think your ethics would withstand a concerted hammering like this?
I wasn’t trying to Gish Gallop you if that’s what you’re implying. That said, I think you are underestimating the inferential distance here. When you say, “rights naturally fall out of personhood”, I literally have no idea what that means. As you saw from my previous comments, I tried to stay away from defining personhood as long as possible, but I’m not sure I can continue to do that if your only answer to “what are rights” is something like “an integral part of personhood”.
It means “if you don’t avoid it, you will be doing something wrong”. That’s all.
Pretty much the only possible ways I can translate the word “wrong” are a). “will lead to highly undesirable consequences”, and b). “is physically impossible”. You ask,
Did you read Deontology for Consequentialists?
Yes I did, and I failed to fully understand it, as well. As I said before, I agree with most (or possibly all) of the rights you listed in your comments, as well as in your article; I just don’t understand what process you used to come up with those rights. For example, I agree with you that “killing people is wrong” is a good rule; what I don’t understand is why you think so, or why you think that “photographing people without permission is wrong” is not a good rule. Your article, as far as I can tell, does not address this.
Pretty much the only possible ways I can translate the word “wrong” are a). “will lead to highly undesirable consequences”, and b). “is physically impossible”.
Treat “acting in a way that violates a right” as “undesirable consequences”—that is, negative utility—and everything else as neutral or positive utility (but not positive enough to outweigh rights violations).
“Wrong” here is, essentially, “carrying negative utility”—not instrumentally, terminally.
Disclaimer: I am not a deontologist, and I’m certainly not Alicorn.
Pretty much the only possible ways I can translate the word “wrong” are a). “will lead to highly undesirable consequences”, and b). “is physically impossible”.
Well, I’m out of ideas for bridging the gap. Sorry.
These are good questions. It seems like deontologists have difficulty reconciling seemingly conflicting rights.
In my main reply to the original post, I discuss some of the conflicts between truthfulness and privacy. If people have a right to not be lied to, and people also have privacy rights, then these rights could clash in some situations.
Not lying, not stealing, and not attacking people does pretty well for everyday purposes.
Why these rights, and not others? For example, why a right to not be murdered, instead of a right to murder one person per year? A once-a-year right to murder can be formulated as a negative right, i.e. non-interference as you murder one person.
(I agree with the listed criteria for rights, BTW.)
Quantity limitations on rights are inelegant (what does one Earth year have to do with personhood-in-general?), so there’s that. Even if you frame it as “the right to go uninterfered-with during the course of one murder per year”, that has a heck of a lot of fiddly bits.
It also doesn’t interact with itself very well. Suppose you are trying to murder me, and I’m the first person you’ve tried to murder all year, and I haven’t murdered anyone all year either, so I try to murder you back—am I interfering with you? It sure looks like it, but I’m just exercising my own right… “Right not to be murdered” doesn’t do that sort of self-defeating thing.
I have other reasons to prefer the “right not to be murdered” version but they are failing to come verbally clear. Something about self-containedness that I’m having trouble explicating.
Consequentialist reasoning which seems to align fairly well with Alicorn’s conclusions (at least the one about it being in some situations correct to hide the truth by being selective even when this in some sense deceives the listener, and at the same time being less correct to directly lie) is touched on here, if that’s useful to you.
Essentially: You don’t know for sure if a person wants general encouragement/niceties or a genuine critique. One way to deal with this is to say something nice+encouraging+true which leaves room for you to switch to “okay but here is what you could do better” mode without contradicting your previous nicety if and only if they communicate clearly they want your full opinion after hearing your careful wording.
I find the reaction to this comment, both in the downvotes and some of the responses, interesting in light of the recent discussion about Tell Culture. That post was highly upvoted, but some people in the comments expressed the opinion that even the people who claim to endorse Tell culture really don’t, and that people who actually consistently operated on Tell Culture would end up getting punished, even in a community where most people claimed to endorse Tell.
As far as I can tell, the reactions to this comment are support for that hypothesis, as I see you as a person who consistently operates on Tell, and then (as in this case) occasionally gets censured for that, even in a community where a lot of people previously claimed that Tell sounds awesome.
I think you have it backwards. Chris Told, Alicorn punished him for it, and the community retaliated. This is a great victory for Tell culture and radical honesty, as long as you don’t believe Alicorn embodies them.
A key difference is that the community is incrementalist and consequentialist, while Alicorn is absolutist and deontologist. A lot of the commenters don’t believe that Alicorn accurately identifies liars. Expelling him is a step backwards from her claimed goal of honest associates. And, indeed, she did specify it was instrumental to this goal and not just a rule she follows without regard to consequences. But it’s probably also that. The community’s failure to grasp the deontological aspects may make its reaction unfair; but I cannot judge for the same reason. The basic reaction is that she is a very strong instance of Guess culture, where her associates have to guess how much to lie to her and are strongly discouraged from talking about it.
I don’t think that follows. The fact that we punish people for telling others about X, and we don’t punish them if we don’t, doesn’t mean we’re punishing them for telling; it means we’re punishing them for X. We’d really like to punish them for X whether they tell or not, it’s just that telling makes it easier.
It may be more understandable to think about it as cheating. You can either lose, or cheat and win. If you lose, you suffer all the effects of a loss. If you cheat, you may not suffer at all. But we don’t describe that as “punishment for not cheating”. It’s the same here: you can lose (have your opinions judged poorly) or cheat (conceal your opinions by not telling anyone, and escape being judged for them).
Not really my business, but a reaction like this may give people an incentive to lie to you.
I think that reaction is walking her talk. She could have changed her preference for inviting him over for dinner silently. Being truthful about her position is an example of being radically honest.
That doesn’t, however, make the response incorrect.
It depends on her reputation for being good at detecting when people lie to her.
If she has a reputation for being good at it and openly makes it known that she punishes people for lying to her, people will be less likely to lie to her. She only has a problem if people believe that she can’t effectively punish people for lying to her because she doesn’t spot the lies.
It doesn’t make sense to adopt a policy where a person sharing information about what it is like to interact with them must never affect how likely you are to interact with them. If someone tells me they’ve taken up smoking, they have contracted tuberculosis, they have decided that punching people in the arm is affectionate behavior, etc., then it’s kind of them to warn me and they could achieve short-term gains by deceiving me instead until I inevitably notice, but I will not reward the kindness of the warning with my company. The case of lying recurses here where the other examples don’t, but my goal is not, “make sure that people who have a tendency to lie don’t lie about having that tendency”. It’s “don’t hang out with people who are going to lie to me, like, at all”.
Good luck with that.
I think it’s a mistake to interpret “I will sometimes do (extreme thing)” as “my threshold for doing (extreme thing) is low enough that I’d be likely to do it in everyday situations”.
If I visited your house, ate your food, and then you asked me “I want to kill my son by running him over with my car because he told me he’s gay. What’s the best way to do this without being caught by the police?”, depending on circumstances, I might slash your tires, or do things that cause as much damage to you as slashing your tires.
So if you asked me if I would slash your tires if you told me something bad, I’d have to say “yes”. But it doesn’t mean that if you invited me to your house you would have to watch what you say to me in fear that I might slash your tires, because the kinds of things that would lead me to do that would also imply that you’re seriously messed up. Nobody would just say those things by accident.
I see this fallacy a lot in rational idea discussions.
It seems like this is an example of my new favorite conversational failure mode: trying to map an abstraction onto the reference class of your personal experience, getting a strange result, and getting upset instead of curious.
ChrisHallquist said there are some circumstances in which he feels compelled to lie. It seems like Alicorn assumed both that this must include some circumstances she’d be likely to subject him to, and that what he thinks of as a lie in that circumstance is something that will fall into the category she objects to. Of course, either of those things or both could be true—but the way to find out is to consider concrete examples (whether real or fictional).
Personally I used to make this mistake a lot when women complained (in vague abstract terms) about being approached by strangers in coffeeshops, and talk about how they’re not obligated to be polite or nice in those cases. Once I got curious and asked questions, and found out that “approached” meant a guy persistently tried to engage her in conversation with no affirmative encouragement from her, and “not polite” didn’t mean “fuck off and die, asshole” but just failing to throw a lot of warmth and smiling into the conversation, it made perfect sense, though I was surprised that it wasn’t already obvious to everyone that no such obligation exists.
I really really like this comment. I really want more clarification now. But from my perspective, someone who has a categorical rule against lying is like learning I’m being graded on everything I say. I suddenly have the massive cognitive burden of making sure everything I say is true and that I mean all the implications or I can suddenly be shunned and outcast.
I’m curious. Is telling the truth really a cognitive burden?
Walking is not a cognitive burden. Walking on a tightrope is. Being able to say whatever I feel like saying without having to analyze it constantly for punishment is the equivalent of simple walking. I may tell the truth in 90-99 percent of the statements I make, but when I get put into a context of punishment, suddenly I have to worry about the consequences of making what would otherwise be a very small step away from the straight and narrow.
Well, I feel like I’m walking on a tightrope much less when I’m allowed to be honest about everything than when I feel like there are things I’d be supposed to lie about.
My confusion increases. If you say whatever you feel like, you sometimes lie?
Yes, of course. Someone asks how I’m doing. I’m having a terrible day but say “fine” because I don’t want to talk about it. Is this example clear enough for you?
As noted elsewhere, that’s not really a lie, because “How are you?” isn’t actually a question, it’s more of a greeting protocol.
That statement only makes the web of lies/things that technically don’t count as lies I have to keep in my head to stay on Alicorn’s good side even more complicated.
I’m not that complicated and I’d rather you didn’t pin the entire intricacy of socialization on me personally. I’m okay with phatics like “fine”, but if you’re actually talking to me, specifically, I’ll also take “enh” or other non-information as a sign not to pursue the conversation as long as I’m reasonably on the ball and you can also tell me “I’d rather not talk about that”.
That’s good to know but I wouldn’t have guessed it from what you said in the post about slashing tires.
You’re aware I did not invent the tire slashing metaphor, right? You seem to be reacting very strongly and specifically to it. I linked a source the first time I used it here.
It seems more like the opposite to me. Telling the truth involves keeping track of what is going on in my head, but lying involves keeping track of what is going on in my head and keeping track of what appears to be going on in my head (and making sure they aren’t identical).
Saying whatever is in my head is easier than making up lies is easier than picking the phrasing of the truth that doesn’t offend or scare people.
Ah, okay. That sounds about right.
This has been my experience as well. Telling the truth requires just saying what’s on your mind, sometimes adjusting to avoid making people mad or to be better understood. Lying requires a lot of effort and is stressful.
This is often true, but often the opposite is true. If telling the truth requires extensive evaluation of actual facts, but lying just requires figuring out what the best thing to say would be, then lying can be less stressful.
As used here, “lying” means “intentional deception”, so if you say something, believing it to be true, but it’s actually false, it’s not lying. The contrast is not saying what’s true vs saying what’s false, but saying what you believe to be true vs saying what you believe to be false.
Depends on cognitive style.
Lying is saying something false while you know better. Not lying doesn’t imply only saying true things or knowing all implications.
The added burden should be minimal, since between friends most people already assume that they are not being lied to, without making it an explicit rule.
I think there’s something missing there.
If someone were to put me in imminent fear for my life, I would feel justified in killing them. Now that you know that, would you be able to spend time with me without a massive cognitive burden of making sure that you don’t put me in imminent fear for my life?
And it’s not even like Chris is saying he’d kill anyone. He didn’t say “shunned and outcast”. He’d just lie to them. You consider being lied to such a horrifying prospect that you would devote massive cognitive resources to making sure it didn’t happen?
you’ve completely misread what I said
To be fair, the sentence he’s quoting is ungrammatical or at least weirdly phrased (“person is like learning”, I had to read that twice), and that may make it more confusing.
Fairness has nothing to do with whether someone is able to accurately read what someone else means.
When faced with weirdly phrased writing, in most cases the effective thing is to simply ignore the point, or to be open about the fact that you don’t understand what someone means and, if you care about understanding it, ask for clarification.
It’s a figure of speech.
And confusion sometimes takes the unfortunate shape of someone thinking they understood and not realizing that they didn’t—they can’t ask to clarify then, can they? Since I believe that, purely as a matter of cause and effect, avoiding poorly formed sentences leads to this happening less often (even in cases when after the fact we would blame the reader more than the writer) I offered that remark as possibly helpful, that’s all.
Do you really believe that someone doesn’t already know that avoiding poorly formed sentences improves understanding of messages? If you don’t then why do you consider it worth saying?
Not really, but then again I’m not sure why you started arguing with me after I gave drethelin feedback on his poorly formed sentence, which he might not have been aware of. So I endeavored to explain to you as clearly as I could why I did that. What are you trying to do here exactly?
You made a point about fairness and I argued that you are wrong to speak about fairness.
This happens in the context of a post by ThisSpaceAvailable. ThisSpaceAvailable recently wrote a post largely complaining that he isn’t treated fairly. In that context it’s worth noting that local community standards are not about treating other people fairly but about promoting conversations that have utility.
Fairness is a very real concept in which some people believe. The fact that you use the word when you don’t want to talk about fairness is a mistake on your part worth pointing out.
Some day I really should get around to writing the post I’ve been thinking of for about a year.
Write the bad version now. Don’t worry about the good version until you have a complete bad one.
K. opens gedit
Just don’t lie to yourself.
You know, unfortunately I’m so much worse at not lying to myself than at not lying to others. (Then again, I’ve found a way to put this to a good use: if promising myself I won’t eat junk food from the vending machines doesn’t work, I promise that to my girlfriend instead. See also Beeminder. Yvain’s “fictional deities” approach also sounds interesting.)
Has his post offended you or something? You employ pretty strong language, and “this post makes me less interested in inviting you over for dinner again” is a kinda public way of breaking off a friendship, which (regardless of cause) is somewhat socially humiliating for the person on the receiving end. Is that really necessary? Couldn’t such personal details be settled via PM?
I don’t see it as a sort of grey fallacy argument to note that “lying” isn’t much of a binary property (i.e., either you lie, or you don’t). There may be simple enough definitions on the surface level, but when considering our various facets of personality, playing different roles to different people in different social settings, context-sensitivity and so on and so forth, insisting on anything remotely like being able to clearly (or at all) and reliably distinguish between “omitting a truth” and “explicitly lying” versus “telling the truth” loses its tenability. There are just too many confounders; nuances of framing, word choice, blurred lines between honesty and courtesy, the list goes on.
Yes, there are cases in which you can clearly think to yourself that “saying this or that would be a lie”, but I see those as fringe cases. Consider your in-laws asking you whether the soup is too salty. Or advertising. Or your boss asking you how you like your new office. Or telling a child about some natural phenomenon. The whole concept of Wittgenstein’s ladder (“lies to children”) would be simplistically denounced as “lying” in an absolute framework.
“Hair trigger about mere true facts” disregards all these shades of “lies” (disparities between internal beliefs and stated beliefs); there are few statements outside of stating mathematical facts for which a total, congruent correspondence between “what I actually believe” and “what I state to believe” can be asserted. Simply because it’s actually extremely hard to express a belief accurately.
Consider you were asked in a public setting whether you’ve ever fantasized about killing someone. Asked in an insistent manner. Dodge this!
Why is this a problem? I’m not Alicorn but I wouldn’t have any issues admitting in public that yes, I’ve fantasized about killing someone. And the situation is very easy to steer towards absurd/ridiculous if the asker starts to demand grisly details :-)
Well, “asked in an insistent manner” does seem to count as evidence that there’s some ulterior reasoning behind the question. Ordinarily I expect a lot of people (though maybe not most people) would be happy to admit that they’ve e.g. fantasized about running over Justin Bieber or whoever their least favorite pop star is with a tank, but I for one would be a lot more inclined to dodge the question or lie outright if my conversational partner seemed a little too interested in the answer.
If the conversational partner seems too interested, I’m likely to start inquiring about his/her fantasies… :-D
Heh. Dunno. Many of these other people (vaguely waves towards society) like to insist they wouldn’t. Not even while they’re in the bathroom, you know, producing rainbows. Makes it a good example.
If I’m interpreting your euphemism correctly: this fetish is not as common as you think it is.
The easiest way is to go meta. Ask the other person why they asked the question. If a person asks a question that’s inappropriate to ask in public you can put the burden to come up with a good answer on them.
It’s generally high-status behavior not to directly answer a question about whether you engaged in bad activity X, but instead to punish the person who asks for implying that you might be a person who engages in bad activity X, by making them justify their bad faith toward you.
It upset me. I don’t like to see lying defended. I would react about the same way to an equally cogent “Defense of Pickpocketing” or “Defense of Throwing Paint On People”, though I imagine those would be much more difficult to construct.
I think there should be negative social consequences to announcing one’s willingness to lie and that there should be significant backlash to issuing a public request that people put up with it.
I think you’re exaggerating the difficulty both of identifying lies and of omitting/deflecting.
“I think about killing my characters off pretty regularly, though often I come up with more creative things to do instead. As far as I know I’m an average amount of susceptible to intrusive thoughts, if that’s what you’re asking, but why are you asking?”
Or if I don’t even trust them with that answer I can just stare at them in silence.
Thanks for telling the truth. But downvoted for “I dislike this position, don’t want to hear it defended, and will punish those who defend it.” To me, this is a much stronger rationalist anathema than white lies.
If you want to share arguments for socially unacceptable ideas you can wrap them into an abstract layer.
When, however, you call for people to change their actions in a way that causes harm, I see no reason why that shouldn’t be punished socially.
This is a forum for discussing ideas, it’s not a forum for playing social games. (I’m saying this as someone who is extremely reluctant about white lies and who hates the idea that they are socially expected to lie. Asking a question when one doesn’t want an honest answer is just silly.)
Except when you’re looking for the social / mental equivalent of a shibboleth.
Okay. Asking a question and then being offended and/or hurt when one gets an honest answer is just silly (alternatively, evil).
Except when acting offended and/or hurt signals solidarity and prompts your allies to attack the alien who got the shibboleth wrong. (You can argue that that’s evil, of course, but then you’re trying to break away from some very, very deeply ingrained instincts for coalition politics.)
I think that’s covered by “alternatively, evil”. ;) More seriously, though: how is “knowing what the preferred answer is and either agreeing with it or being willing to lie” a reasonable criterion by which to filter your group?
It proves that you value loyalty to your group more than you value your own capacity to reason, which means that authoritarian leaders don’t have to consider you a threat (and thus destroy you and everything you hold dear) if they order you to do something against your self-interest. Thus, perversely, when you’re in an environment where power has already concentrated, it can be in your self-interest to signal that you’re willing to disregard your self-interest, even to the point of disregarding your capacity to determine your self-interest.
Once ingrained, this pattern can continue even if those authoritarian leaders lose their capacity to destroy you—and perversely, the pattern itself can remain as the sole threat capable of destroying you if you dissent.
(Put a few layers of genteel classism over the authoritarian leadership, and it doesn’t even have to look autocratic in the first place.)
Definitely covered by “alternatively, evil”. Especially when considering a two-person relationship!
My problem with calling these behaviors “evil” is that they don’t have to be consciously decided upon—they’re just ways that happened to keep our ancestors alive in brutal political environments. Cognitive biases and natural political tendencies may be tragic, but calling them “evil” implies a level of culpability that I think isn’t really warranted.
The choice of words was a bit tongue-in-cheek, but enforcing your power over others in this way is definitely not a nice thing to do. And holding people responsible for such disingenuous behaviour only when they consciously deliberate and decide on it doesn’t seem to be very useful to me. People rarely consciously deliberate and decide upon being assholes. (And if someone does what you described in a two-person relationship, I am very inclined to call them an asshole, at least in my head.)
I wonder if people who have a disadvantaged native social circuitry are more likely to judge other people because their success in social situations requires more conscious deliberation and thus they’re expecting more of it from others.
I don’t know; I’m something of a counterexample to that, and I tend to not associate with other socially disadvantaged people, so I don’t have a good reference class to build examples from.
If you just want to discuss ideas, keep out words like “I”.
Don’t say: “But I will implore you to do one thing: accept other people’s right to lie to you.”
Say: “Here are reasons why you might profit from accepting other people’s right to lie to you.”
Maybe even: “Here are reasons why a person might profit from accepting other people’s right to lie to them.”
You have a point there.
Who gets to decide what’s a social game? Attacking people when they’re perceived to be playing social games seems like a social game to me. It’s the nature of many social games that they employ plausible deniability, which leads to a lot of false positives and hostility if you attack all of the potential threats.
What if it doesn’t really cause that much harm? What if it does more good than harm? Then this sort of punishing behaviour entraps us in our mistake.
I think it’s worth distinguishing between punishing discourse in general and personal social consequences. Chris, the OP, has literally been physically in my house before, and now I have learned that he endorses a personal social habit that I find repellent. I’m not trying to drive him out of Less Wrong because I don’t like his ideas (I didn’t even downvote the OP!), but it seems weird that you feel entitled to pass judgment on the criteria I have for who is welcome to be in my house.
Edit: separated these two quotes. LessWrong comment formatting stuck them together.
I don’t care whether you let him in your house. You’ve publicly shamed him, and you are saying that this kind of status-attack is the just response to a particular argument, regardless of how it’s presented. You also seem to be vilifying me and dodging my complaint by portraying my judgement as against your home-invitation policy, rather than against your public-backlash policy, which I resent as well.
“Vilifying you”? Because I didn’t understand the thrust of your criticism because you didn’t understand the point of my post? I’m tapping out, this is excessive escalation.
Sorry, that was uncharitable. Tapping out is a good idea.
(In the role of a hypothetical interlocutor)
“See this here?” (Pulls out his Asperger’s Club Card) “I have trouble distinguishing what’s socially acceptable to ask from what isn’t, and since you’re such a welcoming host, I hope you also welcome my honest curiosity. I wouldn’t want to lie—or suppress the truth—about which topic interests me right this moment.
As for the reason for my interest, you see, I’m checking whether your deontological barrier against lying can withstand the social inconvenience of (ironically) telling the truth about a phenomenon (fantasizing about killing someone) which is wildly common, but just as wildly lied about.
Your question answered, allow me to make sure I understood you correctly: My question was referring to actual people. Have I inferred correctly that you did in fact fantasize about killing living people (non-fictional) on multiple occasions?”
ETA:
I see. Unfortunately, unlike “pleading the fifth”, not answering when one answer is compromising is kinda giving the answer away. The symmetrical answering policy you’d have to employ in which you stare in silence regardless of whether the answer would be “yes” or “no” is somewhat hard to sell (especially knowing that silence in such a case is typically interpreted as an answer*). Unless you like to stare in silence, like, a lot. And are known to do so.
* “Do you love me?”—silence, also cf. Paul Watzlawick’s “You cannot not communicate.”
You or your character or both have confused “not lying” with “answering all questions put to one”. And for that matter “inviting people who ask rude questions indiscriminately to parties in the first place”.
I’d hoped I addressed this in the edit, “cannot not communicate” and such.
You may find yourself in situations (not at your parties, of course) in which you can’t sidestep a question, or in which attempts to sidestep a question (ETA: or doing the silent stare) will correctly be assumed to answer the original question by the astute observer (“Do you believe our relationship has a future?”—“Oh look, the weather!”).
Given your apparently strong taboo against lying, I was wondering how you’d deal with such a situation (other than fighting the hypothetical by saying “I won’t be in such a situation”).
Sorry, I didn’t see your edit before.
Questions I really can’t sidestep are usually ones from people who, for reasons, I have chosen to allow to become deeply entangled in my life. If one of my boyfriends or my fiancé decides to ask me if our relationship has a future I will tell him in considerable and thoughtful detail where I’m at on that topic, and because I choose to date reasonable human beings, this will not be an intolerable disaster. Occasionally if I’m really wedged (at a family holiday gathering, parent asks me something intrusive, won’t back off if I say it’s none of their business) I can solve the problem by deliberately picking a fight, which is usually sufficient distraction until I am not in their physical presence and can react by selectively ignoring lines in emails, but I don’t like doing that.
I don’t stare at people in silence a lot, but I do often give the visual appearance of wandering attention, and often fail to do audio processing such that I do not understand what people have said. Simply not completing the steps of refocusing my overt attention and asking people to repeat themselves can often serve the purpose when it’s not someone I have chosen to allow to become deeply entangled in my life; if we’re the only people in the room it works less well, but if I know a person well I’ll only be in a room alone with them if I trust them yea far, and if I don’t know them well and they start asking me weird questions I will stare at them incredulously even if the answer is in fact completely innocuous (“Have you ever committed grand theft auto?”; “are you a reptilian humanoid?”).
I think of such tactics as Aes Sedai mode :-)
I knew you were a deontologist (I am a consequentialist), but I had sort of assumed implicitly that our moralities would line up pretty well in non-extreme situations. I realized after reading this how thoroughly alien your morality is to me. You would respond with outrage and hurt if you discovered that someone had written a defense of throwing paint on people? Or pickpocketing? Although I have never practiced either of those activities and do not plan to ever do so, my reaction is totally different.
Pickpocketing is a perfectly practical technique which, like lockpicking, might be used for unsavory purposes by shortsighted or malicious people, but is probably worth knowing how to do and makes a great party trick. And throwing paint on people? Hilarious. It’s not a terribly nice thing to do, especially if the person is wearing nice clothes or is emotionally fragile, but I think most people who can compose a cogent philosophical essay can also target their prankstering semi-competently.
Pickpocketing-as-theft is to lying-in-general as pickpocketing-as-consensual-performance-art is to, say, storytelling, I suppose I should clarify. I think we legitimately disagree about throwing paint on people unless you are being facetious.
In terms of pickpocketing, I agree that we seem to pretty much agree; I think that pickpocketing for the purposes of stealing what doesn’t belong to you is rarely justified. I was not being facetious about the paint part, though.
A more realistic example would be something like “In Defense of Taxation to Fund the Welfare State”—which would be different from “In Defense of Lying”, because even if I think that taxation to fund the welfare state is immoral, I don’t think that someone who holds the opposite position is likely to hold me at gunpoint and demand that I give money to a beggar; but if someone thinks lying is okay to the degree that the OP does, there is a real risk of them lying to me in my personal life. More generally, advocating something bad in the abstract isn’t as bad as advocating something bad that I’m likely to experience personally.
You should try not paying your taxes on the grounds that you don’t want to support the welfare state. If you persist, I’m quite sure at some point men with guns will show up at your doorstep.
Yes, but my friend who is advocating for a welfare state will not be among them. I have nothing to fear from him.
Other than that he probably votes for people who pass laws telling you how much of your money will be taken “for the beggars” and who have no problems sending men with guns to enforce their commands.
He only has one vote out of the many necessary to send men with guns after me. Even if he changed his mind and voted against the welfare state, the probability that anything would change is minuscule. The expected harm from him voting for the welfare state is smaller than that of him sitting next to me after not showering for a couple of days.
But if the pool of voters were much smaller, I’d take a more negative view of his actions.
There’s still cash, right? Might have to change your line of work from bits to bricks too for that to work though.
There is, of course, cash, and the grey economy is not small. But it certainly has its limitations :-/
You lost me there so hard that I am wondering if we’re talking about the same thing—throwing paint at people doesn’t seem to happen in my corner of the world and I’ve never known anyone who got paint thrown at them, so maybe I’m misunderstanding something. So, to be sure, are we talking here about throwing paint, as in the stuff you paint walls with, at people, ruining their clothes, pissing them off, interrupting their day to get washed and changed and all? Is that what you find funny and defensible?
The issue is not so much whether the practice itself is usually done in a defensible manner; it’s that writing an article playing devil’s advocate to make the case for throwing paint at people isn’t an immoral act.
Then I happen to be asking a separate question that isn’t about “the issue”. The paragraph I am responding to is talking about the practice of throwing paint, not about the practice of writing articles about it.
Nobody here defends the practice of throwing paint.
But if you wanted me to, then I would say that it’s preferable to throwing stones at other people. You still make your political point by throwing paint at policemen, but you are causing less lasting damage. Convincing those people on the left who have a habit of throwing stones at policemen in political demonstrations to instead throw paint would cause fewer lasting injuries.
You have even higher returns in utility if you could convince a group like Hamas to throw paint instead of using nail bombs.
No?
Sounds to me like that means “throwing paint is extremely funny and pretty much OK”.
The point of the paragraph is to show that it’s possible to play devil’s advocate in this case. Also a bit about having fun playing devil’s advocate. Joking. Not long ago a fellow member on LW joked about committing bioterrorism. Distinguishing the intent with which something is written is important.
Saying “It’s not a terribly nice thing” labels the action as a hostile action. That means you only do it if you actually want to engage in a hostile action against someone else. Given various choices of hostile actions it’s not clear that throwing paint is a bad choice.
That Vulture’s paragraph could be read that way has occurred to me, but it is far from obvious (you’ll note that my original post here is a request for confirmation that I am reading things correctly). I’ve met people with opinions like that before—not on throwing paint, because again, it’s something I’m unfamiliar with, but on other ways to be a jackass.
But it doesn’t matter. Even if you were correct about that, then if we’re discussing the possibility of Alicorn’s or anyone’s outraged/upset reaction to a defense of throwing paint, this only makes sense if it’s a defense that can be taken seriously, one able to elicit a serious reaction. And not something as silly as “you should prefer it to throwing nail bombs”, which deserves only a shrug. So, either way, I felt compelled to assume Vulture was saying something I’m supposed to be able to follow without suspending all common sense.
I do think that Alicorn follows a policy of being offended when people engage in serious efforts to play devil’s advocate for positions that she considers to be immoral.
Playing devil’s advocate for extremely immoral positions is something that some people can see as a game. If you go to the world debating championships, you might get a topic requiring you to argue that there should be more genocide. For debating folks, making such an argument is a fun game of being intellectually detached from the position that one argues. There are other people who don’t think there’s any use in someone producing the best defense of genocide that it’s possible to produce.
It’s possible to win debating tournaments where judges look at whether the participants make rational arguments while advocating positions that are very immoral. It doesn’t take suspending common sense to make an argument that not enough people throw paint at other people. It just takes intellectual detachment.
I think behaviorally I act almost exactly as you do in terms of trying never to lie but often to evade questions. But for some reason the comment I’m responding to rubs me incredibly negatively. I’m reflecting on why, and I think the difference is that you actually have it easy. You’re trying to live radically honestly in, if I’m not mistaken, the middle of an enclave that has far more of the sort of people who would appreciate LessWrong in your immediate vicinity than most people do. So you can basically choose to be extremely choosy about your friends in this regard.
Try holding everyone around to the same standard you live by when most of your neighbors and colleagues are not associated with the rationalist movement at all, and let’s see how far you get. Let me tell ya, it’s a wee bit harder. For most of us, “be lenient with others and strict with thyself” is a pretty natural default.
I suspect, from Chris’ perspective, if his choices are “be invited to Alicorn’s parties” and “be friends with other people at all,” he may go with the latter.
I believed lying was wrong during times of my life when I didn’t live in a rationalist enclave, too. Curating your friends is easier when you are willing to maintain friendships online. Dinner parties are a luxury I am happy to avail myself of, that’s all.
I grew up in rural Oklahoma, in the “buckle of the Bible Belt”, where anti-intellectualism ran rampant. I was radically honest then (not in the literal sense of “radical honesty”, but in the sense of what Alicorn seems to be advocating), and being an atheist, a consequentialist, a transhumanist, and increasingly a libertarian, it didn’t make me very popular. But lying would have been much, much worse. Telling the truth merely made those people dislike me; lying would have made me compromise my integrity.
“Those who mind don’t matter, and those who matter don’t mind.”
Which totally misses the point of the comment you’re responding to. This isn’t about whether we are radically honest. It’s about whether we insist on everyone we associate with also being radically honest as a condition of our association with them.
That’s a good point. I personally require people I associate with to be honest (except when their lives or livelihoods are at stake), as I hate being lied to. How people respond to this is up to them.
My instant urge when you compared polite lies to slashing your tires is to insult you at length. I don’t think this would be pleasant for anyone involved. Radical Honesty is bad for brains running on human substrate.
I do not and have never endorsed indiscriminate braindumping.
I advocate refraining from taking actions that qualify as “lying”. Lying does not include, among other things: following Gricean conversational maxims, storytelling, sarcasm, mutually-understood simplification, omission, being choosy about conversational topics, and keeping your mouth shut for any reason as an alternative to any utterance.
There is no case where merely refraining from lying would oblige you to insult me at length. I don’t know why everyone is reading me as requiring indiscriminate braindumping.
An emotional response to your statement is not indiscriminate braindumping. I’m not talking about always saying whatever happens to be in my mind at any time. Since I’ve probably already compromised any chance of going to a rationalist dinner party by being in favor of polite lies, I might as well elaborate: I think your policy is insanely idealistic. I think less of you for having it. But I don’t think enough less of you not to want to be around you and I think it’s very likely plenty of people you hang out with lie all the time in the style of the top level post and just don’t talk to you about it. We know that humans are moist robots and react to stimuli. We know the placebo effect exists. We know people can fake confidence and smiles and turn them real. But consequentialist arguments in favor of untruths don’t work on a deontologist. I guess mostly I’m irate at the idea that social circles I want to move in can or should be policed by your absurdity.
I don’t think the above constitutes an indiscriminate braindump but I don’t think it would be good to say to anyone face to face and I don’t actually feel confident it’s good to say online.
This is a summary reasonably close to my opinion.
In particular, outright denouncement of ordinary social norms of the sort used by (and wired into) most flesh people, and endorsement of an alternative system involving much more mental exhaustion for the likes of people like me, feels so much like defecting that I would avoid interacting with any person signalling such opinions.
Incidentally (well after this thread has sort of petered out) I feel the same sort of skepticism or perhaps unenthusiasm about Tell Culture. My summarized thought which applied to both that and this would be, “Yes, neat idea for a science fiction story, but that’s not how humans work.”
Upvoted for the entire comment, but especially this.
And this.
Depending on the context, lies of omission can be as bad as, if not worse than, blatant lies (due to being all the more convincing).
Imagine that I ask you, “did you kill your neighbour ?”, and you answer “no”. The next week, it is discovered that you hired a hitman to kill your neighbour for you. Technically, you didn’t lie… except by omission.
Personally, I’d categorize putting a hit on somebody as killing them, but if you really, sincerely didn’t think of the words as meaning that, and I asked you that question, and you told me ‘no’, then I wouldn’t add lying to your list of crimes (but you’d already be behaving pretty badly).
The thing I’m measuring here is not, actually, the distance traveled in the audience towards or away from omniscience. It’s something else.
Something perplexes me about the view you describe, and it’s this:
What is the point?
That is to say: You say lying is bad. You describe a certain, specifically circumscribed, view of what does and does not count as lying. The set of conditions and properties that define lying (which is bad) vs. things that don’t count as lies, in your view, are not obvious to others (as evidenced by this thread and other similar ones), though of course it does seem that you yourself have a clear idea of what counts as what.
So my question is: what is the point of defining this specific set of things as “lying, which is bad”? Or, to put it another way: what’s the unifying principle? What is the rule that generated this distribution? What’s the underlying function?
Ok, that’s fair; so what would be an example of an omission that, in your model, does not count as a lie and is therefore acceptable ?
What kind of scope of omission are you looking for here? If someone asks “what are you up to today?” or “what do you think of my painting?” I can pick any random thing that I really did do today or any thing I really do think of their painting and say that. “Wrote a section of a book” rather than a complete list, “I like the color palette on the background” rather than ”...and I hate everything else about it”.
Also, not speaking never counts as lying. (Stopping mid-utterance might, depending on the utterance, again with a caveat for sincere mistake of some kind. No tricks with “mental reservation”.)
Ok, that makes sense. But from my perspective, it still sounds like you’re lying; at least, in the second example.
I don’t see any difference between saying, “I think your painting is great !”, and saying something you honestly expect your interlocutor to interpret in the same way, whereas the literal meaning of the words is quite different. In fact, I’d argue that the second option involves twice the lies.
What, never ? Never is a long time, you know. What if your friend asks you, “let me know if any of these paintings suck”, and you say nothing, knowing that all of them pretty much suck ?
I would understand it if your policy was something like, “white lies are ok as long as refusing to engage in them would cause more harm in the long run”; but, as far as I can tell, your policy is “white lies are always (plus or minus epsilon) bad”, so I’m not sure how you can reconcile it with the above.
If your friend asks you to serve as a painting-reviewer and you say you will and then you don’t, that’s probably breach of promise. If your friend asks you to do them this service and you stare blankly at them and never do it, you’re probably being kind of a jerk (it’d be nicer to say “I’m not gonna do that” or something) but you are not lying.
I understand your point, but I still do not understand the motivation behind it. Are you following some sort of a consequentialist morality, or a deontological one that states “overt lies are bad, lies of omission are fine”, or something else ?
As I see it, if a friend asks you “do you like this painting ?” and you reply with “the background color is nice”, the top most likely outcomes are:
1. The friend interprets your response as saying, “yes I like the painting”, as was your intent. In this case, you may not have lied overtly, but you deceived your friend exactly as much.
2. The friend interprets your response as saying, “no, I didn’t like the painting but I’m too polite to say so”. In this case, you haven’t exactly lied, but you communicated the same thing to your friend as you would’ve done with a plain “no”.
3. The friend interprets your response as in (1), with an added “...and also I don’t think you’re smart enough to figure out what I really think”. This is worse than (1).
Similarly, if your friend asks you to review his paintings and you refuse, you’d better have a good reason for refusal (i.e., the truth or some white lie); otherwise, anyone of average intelligence will interpret your response as saying “I hate your paintings but I won’t tell you about it”.
None of what I wrote above matters if you only care about following prescribed rules, as opposed to caring about the effects your actions have on people. Perhaps this is the case ? If so, what are the rules, and how did you come by them ?
I’m Less Wrong’s token deontologist. I thought most people around here knew that. I wrote this article about it and my personal brand of deontology is detailed in this comment.
Sorry, I did not, in fact, know that; and most people here are consequentialists, so I assumed you were one as well. I’d skimmed your post on deontology that you linked to earlier, but I did not understand that it was meant to represent your actual position (as opposed to merely being educational).
As I said above, if your moral system simply has a rule that states “lying is bad except by omission”, or something similar, then none of my points are valid, so you are right and I was wrong, my apologies.
That said, personally, I don’t think that deontology makes any sense except possibly as a set of heuristics for some other moral system. That’s a different line of debate however, and I won’t push it on you (unless you are actually interested in pursuing it).
I’m willing to answer questions about it if you’re curious, but since I dropped out of grad school I haven’t devoted much time to refining either my ethical theory or my ability to explain it so the old article will probably be just about as good. I get enough debating in just from hanging out with consequentialists all the time :P
To expand on what blacktrance said:
As I understand it, deontological systems are, at the core, based on lists of immutable rules.
Where do the rules come from ? For example, one rule that comes up pretty often is something like, “people have inalienable rights, especially the right to A, B and C”. How do you know that people have rights; what makes those rights inalienable; and what makes you so sure that A, B and C are on the list, whereas X, Y and Z are not ?
I think that rights drop naturally out of personhood. Being a person is to be the kind of thing that has rights (and the obligation to respect same). The rights are slightly alienable via forfeiture or waiver, though.
I don’t quite understand what you mean. Even if we can agree on what “personhood” means (and I’ve argued extensively with people on the topic, so it’s possible that we won’t agree), what does it mean for a right to “drop out naturally” out of personhood ? I don’t understand this process at all, nor do I understand the epistemology—how do you determine exactly which rights “drop out naturally”, and which ones do not ?
To use a trivial example, most deontologists would probably agree that something like “the right to not be arbitrarily killed by another person” should be on the list of rights that each person has. Most deontologists would probably also agree that something like “the right to possess three violet-blue glass marbles, each exactly 1cm in diameter” should not be on the list. But why ?
I think Alicorn’s answer concerned the ontological status of rights, not the epistemology thereof.
Understood, but I would like to understand both...
Likewise. For what it’s worth, though, I don’t actually think there is a good answer to the epistemological questions you asked; that’s one of the reasons I favor consequentialism rather than deontology. Of course, I imagine Alicorn’s views on the matter differ, so I, too, would like to see her answer (or that of any other deontologist who cares to respond).
As I mentioned here, consequentialism has the same epistemological problem.
In another branch of this thread I’ve just walked through an assessment of whether a provided example contained a rights violation. Does that help?
And consequentialist systems are, at the core, based on an immutable utility function.
Where does this function come from?
Well, no. Utilitarian systems are based on a utility function (although I’m not aware of any requirement that it be immutable… actually, what do you mean by “immutable”, exactly?). Consequentialist systems don’t have to be utilitarian.
Even so, the origin of a utility function is not that mysterious. If your preferences adhere to the von Neumann-Morgenstern axioms, then you can construct a utility function (up to positive affine transformation, as I understand it) from your preferences. In general, the idea is that we have some existing values or preferences, and we somehow assign utility values to things (“things”: events? world states? outcomes? something) by deriving them from our existing preferences/values. It’s not a trivial process, by any means, but ultimately the source here is the contents of our own brains.
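For reference, a rough statement of the representation result being gestured at (this formalization is my own gloss, not anything stated in the thread): if a preference relation \(\succeq\) over lotteries satisfies the four VNM axioms (completeness, transitivity, continuity, independence), then there exists a utility function \(u\) such that

\[
L \succeq M \iff \mathbb{E}_{L}[u(x)] \ge \mathbb{E}_{M}[u(x)],
\]

and \(u\) is unique up to positive affine transformation, i.e. any \(u'(x) = a\,u(x) + b\) with \(a > 0\) represents the same preferences.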
The problem is that most (all?) people’s preferences don’t.
That’s a valid question, and, admittedly, there’s no good answer that I’m aware of. One might say that, ultimately, the function can be derived from some basic principle like “seek pleasure, avoid pain”, but there’s no objective reason why anyone should follow that principle, as opposed to, say, “seek paperclips, avoid non-paperclips”.
I will grant you that both consequentialism and deontology are based on some a priori assumptions; however, I would argue that the fact that consequentialism is based on fewer such assumptions, as well as its flexibility in the face of new evidence, make consequentialism a more efficient moral system—given that we humans are agents who are reasoning under uncertainty using a comparatively limited amount of data.
I would argue that this “fact” is not in fact true, or at least not obvious. It’s not even clear to me what the content of that claim is supposed to be. If you mean that it takes fewer bits to encode a utility function than a collection of maxims, then this will obviously depend on which utility function or set of maxims is used; also, as Eliezer points out here, this is a really, really bad way to compare moral systems.
Huh? If your claim is that consequentialism is more flexible in the face of new evidence than deontology, you’re going to have to provide some justification for it (as well as specifying precisely what you mean). As I see it, both are inflexible in the sense that ideal agents of either type are incapable of changing their utility function or set of maxims in the face of any evidence, and flexible in the sense that they can use evidence to determine how to pursue their maxims or maximize their utility function, and also in the sense that actual humans will in fact update their maxims or utility function in the face of evidence.
Not necessarily. You are correct in saying that any given arbitrary utility function can be a lot more complex than any given arbitrary set of rules; so strictly speaking I was wrong. However, in practice, we are not dealing with arbitrary functions or rules; we are dealing with limited subsets of functions/rules which are capable of sustaining a human society similar to ours in at least some way. Of course, other functions and other rules can exist, but IMO a moral system that effectively commands its followers to e.g. kill themselves ASAP is not very interesting.
Given this restriction, I believe that consequentialist moral systems which satisfy it will require fewer arbitrary assumptions, in part due to the following:
Changing the maxims is exactly the problem. Given that deontological maxims are essentially arbitrary; and given that the space of all possible human behaviors is quite large; it is already pretty difficult to construct a set of maxims that will account for all relevant behaviors that are currently possible. Of course, you could always create a maxim that amounts to saying, “maximize this specific utility function”, but then you’re just reducing deontology to consequentialism.
In addition, though, as humans acquire more knowledge of and more power over their environment, the set of possible behaviors keeps changing (usually, by increasing in size). This presents a problem for the deontologist, who has to invent new maxims just to keep up (as well as convincing others to use the new maxims which, as you recall, are entirely arbitrary), as well as to possibly revise existing maxims (ditto). The consequentialist, on the other hand, can apply his existing utility function to the new behaviors, or plug the new data into it, in order to come up with a reasonable re-evaluation of the morality (or lack thereof) of each behavior.
To use a trivial example, at some point in human history, it became possible to digitally copy musical performances without paying any money to the original authors. The deontologists are debating to this very day whether such actions count as “theft” or not, “theft” being a prohibited behavior under one specific maxim. Unfortunately, this new behavior doesn’t quite fit the parameters of the original maxim (which was invented before information technology became widespread), hence the debates. But if we dispense with the labels, and attempt to evaluate whether digital music copying ultimately causes more harm than good (or vice versa), then we can at least make some progress.
Upvoted for this, and the excellent (if trivial) digital copying example.
I will add that progress in such cases may also sometimes be made by attempting to discern just what are the origins of our moral intuitions about the wrongness of theft, seeing if those intuitions may be decomposed, and whether they may be reconstructed to yield some concepts that are appropriate to the digital realm. (I’ve got an essay where I attempt to do just that for software piracy, which I may post online at some point...)
The general principle here is that since the basis of our consequentialism systems is the contents of our brains, we can refer to the source material for guidance (or attempt to, anyway). With deontology, since it doesn’t reduce to anything, that move is not open to us. (I think. I remain unclear about where the rules in a deontological system come from.)
Utility functions have the same problem. See below for more details.
Huh? This doesn’t resemble the behavior of any consequentialist I have ever encountered. In practice when presented with new possibilities, consequentialists wind up doing logical back flips to avoid having to do things, such as torturing children to cure malaria, that they find deontologically repugnant.
Yes, of course. I have already said that a deontological system with a single rule that says, “maximize utility function F” would be equivalent to consequentialism, and thus they would share the same problems. However, in practice deontological systems tend to have many more immutable rules than that, and thus they are more susceptible to said problems, as per my previous post.
That sounds like you’re saying, “no one I know is actually a consequentialist, they are all crypto-deontologists in reality”, which may be true but is not relevant.
In addition, you may disagree with the decision to torture children to cure malaria; and that action may in fact be objectively wrong; but nowhere did I say that real consequentialists will always make correct decisions. By analogy, GPS navigation systems don’t give us perfect answers every time, but that doesn’t mean that the very concept of GPS navigation is invalid.
What problems would those be? The only problems you mentioned in your previous post are that deontological maxims are essentially arbitrary, and that deontologists have to invent new maxims to keep up with new behaviors.
When I pointed out that consequentialists have the same problems with changing their utility functions, you declared it “true but not relevant”.
This analogy isn’t accurate. I’m not saying looking at consequences/GPS navigation is invalid. You’re the one who’s saying all non-GPS navigation is invalid/look only at consequences.
Wait, what? What Bugmaster described sounds like the behavior of most of the consequentialists I’ve encountered.
Also, I don’t see what the linked situation (i.e. torture vs. malaria) has actually to do with the current issue. The issue Bugmaster raises is that of new behaviors that don’t precisely resemble any existing behaviors. How does the malaria-children-torture case fit that category?
When presented with a new potential behavior, in this case torturing children to cure malaria, that provides an actual consequentialist reason for doing something deontologically repugnant, he winds up doing logical back flips.
The issue is that the consequentialist has a secret set of deontological maxims, and he chose his utility function to avoid being forced to violate them; he thus has problems when it turns out he does have to violate them to maximize the utility function. His first reaction to this is frequently to deny that the repugnant action would in fact maximize his utility function, sometimes even resorting to anti-epistemology in order to do so. If that fails he will change his utility function; do this enough and the utility function starts to resemble a count of the number of maxim violations.
Edit: Of course, the other possibility is that the consequentialist decides that the repugnant action isn’t so repugnant after all and commences torturing children.
First of all, I must ask that you stop equating utilitarianism with consequentialism.
Second of all, torturing children is not a new behavior, in the way Bugmaster was using the phrase. A new behavior is something that wasn’t available before, wasn’t possible, like “copying digital media”. You couldn’t copy digital media in the year 1699 no matter what your moral beliefs were. You could, on the other hand, torture children all you liked.
Where am I doing that? I don’t think the word “utilitarian” was even used in this discussion previously; I tend to avoid using it since it has several similar but different definitions and thus tends to cause confusion in discussions.
True, but torturing children to cure malaria is. Another example that may make things clearer is wire-heading, which causes problems for a utility function that hasn’t sufficiently specified what it means by “pleasure” just as “copying digital media” can cause problems for maxims that haven’t specified what they mean by “theft”.
My entire point is that you are ascribing things to consequentialism that are true of utilitarianism, but are not true of consequentialism-in-general.
Ok, I was occasionally talking about Von Neumann–Morgenstern consequentialism since that’s what most consequentialists around here are. If you mean something else by “consequentialism”, please define it. We may have a failure to communicate here.
One may be a consequentialist without adhering to the von Neumann-Morgenstern axioms. “Consequentialism” is a fairly general term; all it means is “evaluates normative properties of things[1] on the basis of consequences” (”… rather than other things, such as the properties of the thing itself, that are not related to consequences”).
The SEP article on consequentialism is, as usual, a good intro/summary. To give a flavor of what other kinds of consequentialism one may have, here, to a first approximation, is my take on the list of claims in the “Classic Utilitarianism” section of the article:
Consequentialism: yes.
Actual Consequentialism: no.
Direct Consequentialism: no.
Evaluative Consequentialism: yes, provisionally.
Hedonism: no.
Maximizing Consequentialism: intuition says no, because it seems to exclude the notion of supererogatory acts.
Aggregative Consequentialism: intuition says yes, but this is problematic (Bostrom 2011) [2], so perhaps not.
Total Consequentialism: probably not (though average is wrong too; then again, without the aggregative property, I don’t think this problem even arises).
Universal Consequentialism: intuition says no, but I have a feeling that this is problematic; then again, a “yes” answer to this, while clearly more consistent, fails to capture some very strong moral intuitions.
Equal Consideration: see the universal property; same comment.
Agent-neutrality: seems like obviously yes but this is one I admit I know little about the implications of.
As you can see, I reject quite a few of the claims that one must assent to in order to be a classic utilitarian (and a couple which are required for VNM-compliance), but I remain a consequentialist.
[1] Usually “things” = acts, “properties” = moral rightness.
[2] Infinite Ethics
Should I take that to mean that only on the basis of consequences, or on the basis of consequences and other things?
Only, yes.
Edit: Although one of the interesting conclusions of Bostrom’s aforementioned paper is that bounding aggregative consequentialism with deontology gives better[1] results than just applying consequentialism. (Which I take to cast doubt on the aggregative property, among other things, but it’s something to think about.)
[1] “Better” = “in closer accord with our intuitions”… sort of. More or less.
Ok, in that case most of my criticism of consequentialism still applies, just replace “utility function” with whatever procedure general consequentialists use to compute moral actions.
No, I really don’t think that it does.
Consequentialists get their “whatever procedure” from looking at human moral intuitions and shoring them up with logic — making them more consistent (with each other, and with themselves given edge cases and large numbers and so forth), etc., while hewing as close as possible to the original intuitions.
It’s a naturalistic process. It’s certainly not arbitrarily pulled from nowhere. The fact is that we, humans, have certain moral intuitions. Those intuitions may be “arbitrary” in some abstract sense, but they certainly do exist, as actual, measurable facts about the world (since our brains are part of the world, and our brains are where those intuitions live).
I mean, I’m not saying anything new here. Eliezer had a whole sequence about more or less this topic. Robin Hanson wrote a paper on it (maybe multiple papers, but I recall one off the top of my head).
Now, you could ask: well, why look to our moral intuitions for a source of morality? And the answer is: because they’re all we have. Because they are what we use (the only thing we could use) to judge anything else that we select as the source of morality. Again, this stuff is all in the Sequences.
Really, to me it looks more like they take one moral intuition, extrapolate it way beyond its context, and disregard the rest.
We also have a lot of deontological moral intuitions and even more virtue ethical moral intuitions.
If you mean the meta-ethics sequence, it’s an argument for why we base our morality on intuitions (and even then I don’t think that’s an entirely accurate summary); its argument for pure consequentialism is a lot weaker and relies entirely on the VNM theorem. Since you’ve claimed not to be a VNM consequentialist, I don’t see how that sequence helps you. Also, you do realize there are bookshelves full of philosophers who’ve reached different conclusions?
Would you apply the same logic to claim that our physical intuitions are our only source of physics? Or, to use an even more obvious parallel, that our mathematical intuitions are our only source of mathematics? In a sense these statements are indeed true, but it is certainly misleading to phrase it that way.
Also, if you say moral intuition is our only source of morality, and people’s moral intuitions differ, are they obligated to obey their personal moral intuitions? If so, does that mean it’s moral for me to murder if my intuition says so? If not, whose intuition should we use?
Which moral intuition is that...?
Yes, I studied some of them in college. My assessment of academic philosophers is that most of them are talking nonsense most of the time. There are exceptions, of course. If you want to talk about the positions of any particular philosopher(s), we can do that (although perhaps for that it might be worthwhile to start a new Discussion thread, or something). But just the fact that many philosophers think some particular thing isn’t strong evidence of anything interesting or convincing.
Um, what logic? For physics and mathematics the claim that “our X-ical intuitions are our only source of X” is simply false: for physics we can do experiments and observe the real world, whereas mathematics… well, there’s more than one way to view it, but if you take mathematics to consist merely of formal systems, then those systems have no “source” as such. Insofar as any of those formal systems describe any aspect of reality, we can look at reality and see that.
For morality there just isn’t anything else, beyond our intuitions.
Moral laws don’t exist anywhere outside of human brains, so in one sense this entire line of questioning is meaningless. It’s not like moral laws can actually compel you to do one thing or another, regardless of whether you are a consequentialist or a deontologist or what. Moral laws have force insofar as they are convincing to any humans who have the power to enforce them, whether this be humans deciding to follow a moral law in their own lives, or deciding to impose a moral law on others, etc.
If people’s moral intuitions differ then I guess those people will have to find some way to resolve that difference. (Or maybe not? In some cases they can simply agree to go their separate ways. But I suppose you’d say, and I’d agree, that those are not the interesting cases, and that we’re discussing those cases where the disagreement on morality causes conflict.)
I mean, I can tell you what tends to happen in practice when people disagree on morality. I can tell you what I in particular will do in any given case. But asking what people should do in cases of moral disagreement is just passing the buck.
I hope you’re not suggesting that deontology, or any other system, has some resolution to all of this? It doesn’t seem like you are, though; I get the sense that you are merely objecting to the suggestion that consequentialism has the answers, where deontology does not. If so, then I grant that it does not. However, these are not the questions on which basis I judge deontology to be inferior.
Rather, my point was that even if we grant that there are, or should be, absolute, unbreakable moral laws that judge actions, regardless of consequences (i.e. accept the basic premise of deontology), it’s entirely unclear what those laws should be, or where they come from, or how we should figure out what they are, or why these laws and not some others, etc. Consequentialism doesn’t have this problem. Furthermore, because moral intuitions are the only means by which we can judge moral systems, the question of whether a moral system satisfies our moral intuitions is relevant to whether we accept it. Deontology, imo, fails in this regard to a much greater degree than does consequentialism.
Because our physical intuitions tell us that this should work.
Then why are we focusing on those particular formal systems? Also, where do our ideas about how formal systems should work come from?
Well, look at the game-theory-based decision theories, and notice that they seem to be converging on something resembling Kantian deontology. Also, why do you hope that? Don’t you want the issue resolved?
I’m not really sure what you mean by this.
Why indeed? Mathematics does sometimes examine formal systems that have no direct tie to anything in the physical world, because they are mathematically interesting. Sometimes those systems turn out to be real-world-useful.
What do you mean, “how formal systems should work”? Formal systems are defined in a certain way. Therefore, that is how they work. Why do we care? Well, because that’s an approach that allows us to discover/invent new math, and apply that math to solve problems.
Really? Kantian deontology, and definitely not rule consequentialism?
I meant, by that, that such a claim would be clearly false. If you were claiming clearly false things then that would make this conversation less interesting. ;)
Where does your belief that observing the world will lead us to true beliefs come from?
First, where do those definitions come from? Second, as Lewis Carroll showed, a definition of a formal system is not the same as a formal system, since definitions of a formal system don’t have the power to force you to draw conclusions from premises.
Yes, you may want to look into decision theories, many of which take superrationality as their starting point. Or do you mean taking the Categorical Imperative as a rule-consequentialist rule?
Careful: just because you can’t think of a way to resolve a philosophical problem doesn’t mean there is no way to resolve it.
http://yudkowsky.net/rational/the-simple-truth
… and many posts in the Sequences. (The posts/essays themselves aren’t an answer to “where does this belief come from”, but their content is.)
We made ’em up.
http://lesswrong.com/lw/rs/created_already_in_motion/
I am passingly familiar with these systems. I don’t know why you would claim that they have anything to do with deontology, since the entire motivation for accepting superrationality is “it leads to better consequences”. If you follow unbreakable rules because doing so leads to better outcomes, then you are a consequentialist.
Um, ok, fair enough, so in that case how about we stop dancing around the issue, and I will just ask straight out:
Do you believe that deontology has a resolution to the aforementioned issues? Or no?
That article ultimately comes down to relying on our (evolved) intuition, which is exactly my point.
Once you self-modify to always follow those rules, you are no longer a consequentialist.
Quite possibly.
Upvoted for spotting something probably non-obvious: the parallel between Kantian ethics and certain decision theories seems quite interesting and never occurred to me. It’s probably worth exploring how deep it runs, perhaps the idea that being a rational agent in itself compels you inescapably to follow rules of a certain form might have some sort of reflection in these decision theories.
I certainly would hope that there doesn’t turn out to be a universal cosmic moral law derivable from nothing but logic, if it happens to be a law I really hate like “you must kill kittens”. :)
Also:
This is true. Personally, I think that to the extent that those intuitions ought to be satisfied, they are compatible with consequentialism. This isn’t 100% true, but it’s fairly close, it seems to me.
Except you defined consequentialism as only caring about consequences.
Yes. What contradiction do you see...?
Those intuitions involve caring about things besides consequences. One way to deal with this is to say that those intuitions shouldn’t be satisfied, but then you are left with the question of on what basis you are making that claim. The other way I’ve seen people deal with it is to expand the definition of “consequences” until the term is so broad as to be meaningless.
I agree that the latter maneuver is a poor way to go. The former does make the resulting morality rather unsatisfactory.
My view —
— is another way of saying that some intuitions that seem deontological or virtue-ethical are in fact consequentialist. Others are not consequentialist, but don’t get in the way of consequentialism, or satisfying them leads to good consequences even if the intuitions themselves are entirely non-consequentialist. The remainder generally shouldn’t be satisfied, a decision that we reach in the same way that we resolve any conflict between our moral intuitions:
Very carefully.
For example, do you think creating a person and then killing him is morally equivalent to not creating him in the first place because the consequences are the same?
Those are two different consequences.
What do you mean? If I dispose of the body well enough I can make the final outcome atom-for-atom identical.
Can you expand on what you mean by “final outcome” here, and why it matters?
For my part, I would say that the difference between the world in which a person lives N years and then dies and all the effects of that person’s actions during those N years are somehow undone, and the world in which they didn’t live at all, is the N years of that person’s life.
What you seem to want to say is that those N years aren’t a consequence worthy of consideration, because after the person’s death they aren’t alive anymore, and all that matters is the state of the world after their death. Did I get that right?
That puzzles me. It seems that by this reasoning, I can just as readily conclude that if the universe will ultimately achieve a maximum-entropy condition, then a consequentialist must conclude that all actions are ultimately equally moral, since the “final outcome” will be identical.
My point is that this is what I meant by expanding the definition of “consequences” here.
That is the usual meaning; at least, I thought it was. Perhaps what we have here is a sound/sound dispute.
I dunno.
At the risk of repeating myself: it seems to me that if action A results in a year of my life followed by the eradication of all traces of my existence, and action B results in two years of my life followed by the eradication of all traces of my existence, then if I consider years of my life an important differential consequence with which to evaluate the morality of actions at all, I should prefer B to A since it creates an extra year of my life, which I value.
The fact that the state of the world after two years is identical in both branches of this example isn’t the only thing that matters to me, or even the thing that matters most to me.
For my own part, I don’t see how that makes “consequences” a meaningless term, and I can’t see why anyone for whom the only consequences that matter are the “final” outcome should be a consequentialist, or care about consequences at all.
Again, I suspect this is a terminological confusion—a confusion over what “consequentialism” actually means caring about.
To you—and me—a “consequence” includes the means, the end, and any inadvertent side-effects. Any result of an action.
To Eugine, and some others, it includes the end, and any inadvertent side-effects; but apparently the path taken to them, the means, is not included. I can see how someone might pick up this definition from context, based on some of the standard examples. I’ve done similar things myself with other words.
(As a side note, I have also seen it assumed to include only the end—the intended result, not any unintended ones. This is likely due to using consequentialism to judge people, which is not the standard usage but common practice in other systems.)
Perhaps not coincidentally, I have only observed the latter two interpretations in people arguing against consequentialism, and/or the idea that “the ends justify the means”. If you’re interested, I think tabooing the terms involved might dissolve some of their objections, and you both may find you now disagree less than you think. But probably still a bit.
As I understand Eugine, he’d say that in my example above there’s no consequentialist grounds for choosing B over A, since in two years the state of the world is identical and being alive an extra year in the interim isn’t a consequence that motivates choosing B over A.
If I’ve understood properly, this isn’t a terminological confusion, it’s a conflict of values. If I understood him correctly, he thinks it’s absurd to choose B over A in my example based on that extra year, regardless of whether we call that year a “consequence” or something else.
That’s why I started out by requesting some clarification of a key term. Given the nature of the answer I got, I decided that further efforts along these lines would likely be counterproductive, so I dropped it.
Right, as a reductio of choosing based on “consequentialist grounds”. His understanding of “consequentialist grounds”.
Sorry, I’m not following.
A reductio argument, as I understand it, adopts the premise to be disproved and shows how that premise leads to a falsehood. What premise is being adopted here, and what contradiction does it lead to?
Um, the premise is that only “consequences” or final outcomes matter, and the falsehood derived is that “creating a person and then killing him is morally equivalent to not creating him in the first place because the consequences are the same”.
But it looks like there may be an inferential distance between us? Regardless, tapping out.
That’s your privilege, of course. Thanks for your time.
My understanding of consequentialism is similar to yours and TheOtherDave’s. In a chain of events, I consider all events in the chain to be a consequence of whatever began the chain, not just the final state.
I can’t, to be honest. Pretty much all the standard examples that I can think of relating to consequentialism fall into one of two categories: first, thought experiments aimed at forcing counterintuitive behavior out of some specific dialect of utilitarianism (example: the Repugnant Conclusion); and second, thought experiments contrasting some noxious means with a desirable end (example: the trolley problem).
Biting the bullet on the latter is a totally acceptable response and is in fact one I endorse; but I can’t see how you can look at e.g. the trolley problem and conclude that people biting that bullet are ignoring the fat man’s life; its loss is precisely what makes the dilemma a dilemma. Unless I totally misunderstand what you mean by “means”.
Now, if you’re arguing for some non-consequential ethic and you need some straw to stuff your opponent with… that’s a different story.
They’re not ignoring his life; they’re counting it as 1 VP (Victory Point) and contrasting that with the larger number of VPs they can get by saving the people on the track. The fact that you kill him directly is something you’re not allowed to consider.
Well, nothing in the definition of consequential ethics requires us to be looking exclusively at expected life years or pleasure or pain. It’s possible to imagine one where you’re summing over feelings of violated boundaries or something, in which case the fact that you’ve killed the guy directly becomes overwhelmingly important and the trolley problem would straightforwardly favor “do not push”. It’s just that most consequential ethics don’t, so it isn’t; in other words this feature emerges from the utility function, not the metaethical scheme.
(As an aside, it seems to me that preference utilitarianism—which I don’t entirely endorse, but which seems to be the least wrong of the common utilitarianisms—would in many cases weight the fat man’s life more heavily than that of a random bystander; many people, given the choice, would rather die by accident than through violence. It wouldn’t likely be enough to change the outcome in the standard 1:5 case, but it would be enough to make us prefer doing nothing in a hypothetical 1:1 case, rather than being indifferent as per total utilitarianism. Which matches my intuition.)
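(A toy arithmetic sketch of that weighting, not from the original comment: the 1.2 factor below is an invented stand-in for “many people would rather die by accident than through violence.”)

    # Toy sketch: weight a death by violence slightly more heavily than a
    # death by accident. The 1.2 factor is invented for illustration only.
    VIOLENT_DEATH_WEIGHT = 1.2

    def disutility(pushed_deaths, accident_deaths):
        # Deaths caused by pushing count a little more than accidental ones.
        return VIOLENT_DEATH_WEIGHT * pushed_deaths + 1.0 * accident_deaths

    print(disutility(1, 0) > disutility(0, 1))  # True: in a 1:1 case, prefer doing nothing
    print(disutility(1, 0) > disutility(0, 5))  # False: in the 1:5 case, pushing still wins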
So you’re willing to allow summing over feelings of violated boundaries, but not summing over actual violated boundaries, interesting.
That was one example in a very large space of possibilities; you can differentiate the consequences of actions in any way you please, as long as you’re doing so in a well-behaved way. You don’t even need to be using a sum—average utilitarianism doesn’t.
This does carry a couple of caveats, of course. Some methods give much less pathological results than others, and some are much less well studied.
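(For concreteness, a minimal sketch of how the aggregation rule, rather than consequentialism itself, drives the verdict; the individual utility numbers are invented.)

    # Sum vs. average aggregation over the same invented individual utilities.
    population_a = [5.0, 5.0]            # small, better-off population
    population_b = [4.0, 4.0, 4.0, 4.0]  # larger, slightly worse-off population

    def total(utilities):
        return sum(utilities)

    def average(utilities):
        return sum(utilities) / len(utilities)

    print(total(population_a), total(population_b))      # 10.0 16.0 -> B wins under total
    print(average(population_a), average(population_b))  # 5.0 4.0   -> A wins under average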
Summing over actual violated boundaries is also a possible consequentialism, but it does not seem to capture the intuitions of those deontological theories which forbid pushing the fat guy. Suppose the driver of the trolley is a mustache-twirling villain who has tied the other five people to the tracks deliberately to run the trolley over them (thus violating their boundaries). Deontologists would say this makes little difference for your choice in the dilemma: you are still not permitted to throw the fat man on the tracks to save them. This deontological rule cannot be mimicked with a consequentialism that assigns high negative value to boundary-violations regardless of agent. It can perhaps (I am not entirely sure) be mimicked with a consequentialism that assigns high negative value to the subjective feeling of violating a boundary yourself.
Well, most of the well-known consequentialist dilemmas rely on forbidding considering the path; in fact, not caring about it is one of the premises of the VNM theorem.
As I said, “I can see how someone might pick up this definition from context, based on some of the standard examples.”
I don’t think it’s the intention of those examples, however—at least, not the ones that I’m thinking of. Could you describe the ones you have in mind, so we can compare interpretations?
I … think this is a misinterpretation, but I’m most definitely not a domain expert, so could you elaborate?
Well, caring about the path renders the independence axiom meaningless.
Really? Again, I’m not an expert, but …
How does saying that something positive-utility remains good independent of other factors, and something negative-utility remains bad, preclude caring about those other factors too? If it did, why would that only include “the path”, and not other things we care about, because other subsets of reality are good or bad independent of them too?
Don’t get me wrong; I understand that in various deontological and virtue ethics systems we wouldn’t care about the “end” at all if it were reached through incorrect “means”. Consequentialists reject this*; but by comparing the end and the means, not ignoring the means altogether! At least, in my limited experience, anyway.
Again, could you please describe some of the thought experiments you were thinking of?
*(although they don’t all care for independence as an axiom, because it doesn’t apply to instrumental goals, only terminal ones)
To take an extreme example, in the classic cannibal lifeboat scenario, the moral solution is generally considered to draw straws. That is, this is considered preferable to just eating Bill, or Tom for that matter, even though according to the independence axiom there should be a particular person among the participants sacrificing whom would maximize utility.
I don’t think that’s a consequentialist thought experiment, though? Could you give examples of how it’s illustrated in trolley problems, ticking time bomb scenarios, even forced-organ-donation-style “for the greater good” arguments? If it’s not too much trouble—I realize you’re probably not anticipating huge amounts of expected value here.
(I think most LW-style utilitarian consequentialists would agree there is probably an optimal one, but unilaterally deciding that yourself might lead to additional consequences—better to avoid selfish infighting and, most importantly, perceived unfairness, especially when you may be too uncertain about the outcomes anyway. So that’s a data point for you.)
What do you mean by “consequentialist thought experiment”?
Yes, you can always argue that any behavior is instrumental, replacing it with the reason it came to be thought of as moral, but if you go down that route, you’ll end up concluding the purpose of life is to maximize inclusive genetic fitness.
One of the standard thought experiments used to demonstrate and/or explain consequentialism. I’m really just trying to see what your model of consequentialism is based on.
Well, we’re adaptation-executors, not fitness-maximizers—the environment has changed. But yeah, there’s a very real danger in coming up with grandiose rationalizations for how all your moral intuitions are really consequences of your beautifully simple unified theory.
And there’s a very real danger of this being a fully general counterargument against any sufficiently simple moral theory.
You’re absolutely right about that. In fact, there’s a danger that it can be a fully general counterargument against any moral theory at all! After all, they might simply be rationalizing away the flaws...
I wouldn’t endorse using it as a counterargument at all, honestly. If you can point out actual rationalizations, that’s one thing, but merely calling someone a sophisticated arguer is absolutely a Bad Idea.
Well, as Eliezer explained here, simple moral systems are in fact likely to be wrong.
I think that’s one of the areas where Eliezer got it completely wrong. Value isn’t that complex, and it’s a mistake to take people’s apparent values at face value as he seems to.
Our values are psychological drives from a time in our evolutionary history before we could possibly be consequentialist enough to translate a simple underlying value into all the actions required to satisfy it. Which means that evolution had to bake in the “break this down into subgoals” operation, leaving us with the subgoals as our actual values. Lots of different things are useful for reproduction, so we value lots of different things. I would not have found that wiki article convincing either back when I believed as you believe, but have you read “Thou Art Godshatter”?
People have drives to value different things, but a drive to value is not the same thing as a value. For example, people have an in-group bias (tribalism), but that doesn’t mean that it’s an actual value.
If values are not drives (note I am saying values are drives, not “drives are values”, “drives to value are values”, or anything else besides “values are drives”), what functional role do they play in the brain? What selection pressure built them into us? Or are they spandrels? If this role is not “things that motivate us to choose one action over another,” why are they motivating you to choose one action over another? If that is their role, you are using a weird definition of “drive”, so define “Fhqwhgads” as “things that motivate us to choose one action over another”, and substitute that in place of “value” in my last argument.
If values are drives, but not all drives are values, then… (a) if a value is a drive you reflectively endorse and a drive you reflectively endorse is a value, then why would we evolve to reflectively endorse only one of our evolved values? (b) otherwise, why would either you or I care about what our “values” are?
I agree that values are drives, but not all drives are values. I dispute that we would reflectively endorse more than one of our evolved drives as values. Most people aren’t in a reflective equilibrium, so they appear to have multiple terminal values—but that is only because they aren’t in a reflective equilibrium.
What manner of reflection process is it that eliminates terminal values until you only have one left? Not the one that I use (At least, not anymore, since I have reflected on my reflection process). A linear combination (or even a nonlinear combination) of terminal values can fit in exactly the same spot that a single value could in a utility function. You could even give that combination a name, like “goodness”, and call it a single value (though it would be a complex one). So there is nothing inconsistent about having several separate values.
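(A minimal sketch of that point, with invented value names and weights: a weighted combination of several terminal values slots into a utility function exactly where a single value would.)

    # Hypothetical terminal values and weights, invented purely for illustration.
    WEIGHTS = {"happiness": 0.6, "honesty": 0.3, "artistic_expression": 0.1}

    def utility(outcome):
        # Score an outcome as a linear combination of several terminal values;
        # the combination occupies the same slot a single value would.
        return sum(w * outcome.get(value, 0.0) for value, w in WEIGHTS.items())

    print(utility({"happiness": 5.0, "honesty": 0.0}))  # 3.0
    print(utility({"happiness": 2.0, "honesty": 9.0}))  # ~3.9: the combination, not any one value, decides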
Let me hazard a guess, based on my own previous reflection process, now abandoned due to meta-reflection. First, I would find a pair of thought experiments where I had strong feelings for an object-level choice in each, and I felt I was being inconsistent between them. Of course, object-level choices in two different scenarios can’t be inconsistent. There is a computation that returns both of those answers, namely, whatever was going on in your pre-reflection brain.
For example, “throw the level, redirect the trolley to kill 1 instead of 5” and “don’t butcher the healthy patient and steal their organs to save five.”
The inconsistency is in the two principles I would have automatically come up with to explain two different object-level choices. Or, if my reasons for one emotional reaction are too complicated for me to realize, then it’s between one principle and the emotional reaction. Of course, the force behind the principle comes from the emotional reaction to the thought experiment which motivated it.
Then, I would let the two emotions clash against each other, letting my mind flip between the two scenarios back and forth until one started to weaken. The winner would become stronger, because it survived a clash. And so did the principle my mind coughed up to explain it.
What are the problems with this?
1. It favors simple principles for the sole reason that they are easier to guess by my conscious mind, which of course doesn’t really have access to the underlying reasons. It just thinks it does. This means it depends on my ignorance of other more complicated principles. This part can be destroyed by the truth.
2. The strength of the emotion for the object-level choice is often lent to the principle by something besides what you think it is. Yvain covered this in an essay that you, being a hedonistic utilitarian, would probably like: Wirehead Gods on Lotus Thrones. His example is that being inactive and incredibly happy without interruption forever sounds good to him if he thinks of Buddhists sitting on lotuses and being happy, but bad if he thinks of junkies sticking needles in their arms and being happy. With this kind of reflection, you consciously think something like: “Of course, sitting on the lotus isn’t inherently valuable, and needles in arms aren’t inherently disvaluable either,” but unconsciously, your emotional reaction to that is what’s determining which explicit principles like “wireheading is good” or “wireheading is bad” you consciously endorse.
3. All of your standard biases are at play in generating the emotional reactions in the first place. Scope insensitivity, status quo bias, commitment bias, etc.
4. This reflection process can go down different paths depending on the order in which thought experiments are encountered. Suppose you get the “throw switch, redirect trolley” one first, and then are told you are a consequentialist, and that there are other people who don’t throw the switch because then they would personally be killing someone. You think about their thought process and reject it as a bad principle, and then you see the “push the fat man off the bridge” one, and you think “wow, this really feels like I shouldn’t push him off the bridge, but [I have this principle established where I act to save the most lives, not to keep my hands clean]”, and slowly your instinct (as mine did) starts to become “push the fat man off the bridge.” And then you hear the transplant version, and you become a little more consequentialist. And so on. It would be completely different if most people heard the transplant one first (or an even more deontology-skewed thought experiment). I am glad, of course, that I have gone down this path as far as I have. Being a consequentialist has good consequences, and I like that! But my past self might not have agreed, and likewise I probably won’t agree with most possible changes to my values. Each version of me judges differences between the versions under its own standards.
5. There’s the so-called sacred vs. secular value divide (I actually think it’s more of a hierarchy, with several layers of increasing sacredness, each of which feels like it should lexically override the last), where pitting a secular value against a sacred value makes the secular value weaker and the sacred one stronger. But which values are secular or sacred is largely a function of what your peers value.
And whether a value becomes stronger or weaker through this process depends largely on which pairs of thought experiments you happen to think of. Is a particular value, say “artistic expression”, being compared to the value of life, and therefore growing weaker, or is it being compared to the value of not being offended, and therefore growing stronger?
So that you don’t ignore my question like you did the one in the last post, I’ll reiterate it. (And I’ll add some other questions). What process of reflection are you using that you think leads people toward a single value? Does it avoid the problems with my old one that I described? Is this a process of reflection most people would meta-reflectively endorse over alternative ones that don’t shrink them down to one value? (If you are saying that people who have several values are out of reflective equilibrium, then you’d better argue for this point.)
I endorse the process you rejected. I don’t think the problems you describe are inevitable. Given that, if people’s values cause them conflict in object-level choices, they should decide what matters more, until they’re at a reflective equilibrium and have only one value.
But how do you avoid those problems? Also, why should contemplating tradeoffs between how much of each value we can get force us to pick one? I bet you can imagine tradeoffs between bald people being happy and people with hair being happy, but that doesn’t mean you should change your value from “happiness” to one of the two. Which way you choose in each situation depends on how many bald people there are and how many non-bald people there are. Similarly, with the right linear combination, these are just tradeoffs, and there is no reason to stop caring about one term because you care about the other more. And you didn’t answer my last question. Why would most people meta-reflectively endorse this method of reflection?
1, as you said, can be destroyed by the truth (if they’re actually wrong), so it’s part of a learning process. 2 isn’t a problem once you isolate the principle by itself, outside of various emotional factors. 3 is a counterargument against any kind of decision-making; it means that we should be careful, not that we shouldn’t engage in this sort of reflection. 4 is the most significant of these problems, but again it’s just something to be careful about, same as in 3. As for 5, that’s to be solved by realizing that there are no sacred values.
It doesn’t, you’re right. At least, contemplating tradeoffs doesn’t by itself guarantee that people would choose only one value. But it can force people to endorse conclusions that would seem absurd to them—preserving one apparent value at the expense of another. Once confronted, these tensions lead to the reduction to one value.
As for why people would meta-reflectively endorse this method of reflection—simply, because it makes sense.
So what, on your view, is the simple thing that humans actually value?
Pleasure, as when humans have enough of it (wireheading) they will like it more than anything else.
(nods) Well, that’s certainly simple.
So it seems to follow that if I offer someone the choice of murdering their child in exchange for greater pleasure, and they turn me down, we can confidently infer that they simply don’t believe I’ll follow through on the offer, because if they did, they would accept. Yes?
Believing that there is no such thing as greater pleasure than the loss from having your child murdered, is a subset of “not believing you’ll follow through on your offer”.
Yes, that’s true. If you believe what I’m offering doesn’t exist, it follows that you ought not believe I’ll follow through on that offer.
I don’t think you’re following that to the logical conclusion, though. You were implicitly arguing that most people’s refusal would not be based on “doesn’t believe I’ll follow through”. It is entirely plausible that most people would give the reason which I described, and as you have admitted, the reason which I described is a type of “doesn’t believe I’ll follow through”. Therefore, your argument fails, because contrary to what you claimed, most people’s refusal would (or at least plausibly could) be based on “doesn’t believe I’ll follow through”.
I agree that most people’s refusal would be based on some version of “doesn’t believe I’ll follow through.”
I’m not clear on where I claimed otherwise, though… can you point me at that claim?
It’s true that you didn’t explicitly claim people wouldn’t do that, but in context, you did implicitly claim that. In context, you were responding to something you disagreed with and so it must mean that you thought that they would not in fact do that and you were presenting the claim that they would not do that to support your argument.
https://en.wikipedia.org/wiki/Implicature https://en.wikipedia.org/wiki/Cooperative_principle
I see.
OK. Thanks for clearing that up.
Someone recently suggested that there should be a list of 5 geek linguistic fallacies and I wonder if something like this should go in the list.
Your response seems very strange because either you meant to imply what you implied (in which case you thought you could misrepresent yourself as not implying anything), or you didn’t (in which case you said a complete non-sequitur that by pure coincidence sounded exactly like an argument you might have made for real)
What response were you expecting?
My original question was directed to blacktrance, in an attempt to clarify my understanding of their position. They answered my question, clarifying the point I wanted to clarify; as far as I’m concerned it was an entirely sucessful exchange.
You’ve made a series of assertions about my question, and the argument you inferred from it, and various fallacies in that argument. You are of course welcome to do so, and I appreciate you answering my questions about your inferences, but none of that requires any particular response on my part as far as I can tell. You’ve shared your view of what I’m saying, and I’ve listened and learned from it. As far as I’m concerned that was an entirely successful exchange.
I infer that you find it unsatisfying, though. Well, OK. Can you state what it is you’re trying to achieve in this exchange, and how I can help you achieve it?
It appeared that you’re either willfully deceptive or incapable of communicating clearly, in such a way that it looks willfully deceptive. I was hoping you’d offer another alternative than those.
The other alternative I offer is that you’ve been mistaken about my goals from the beginning.
As I said a while back: I asked blacktrance a question about their working model, which got me the information I wanted about their model, which made it clear where our actual point of disagreement was (specifically, that blacktrance uses “values” to refer to what people like and not what they want). I echoed my understanding of that point, they agreed that I’d understood it correctly, at which point I thanked him and was done.
My goal was to more clearly understand blacktrance’s model and where it diverged from mine; it wasn’t to challenge it or argue a position. Meanwhile, you started from the false assumption that I was covertly making an argument, and that has informed our exchange since.
If you’re genuinely looking for another alternative, I recommend you back up and examine your reasons for believing that.
That said, I assume from your other comments that you don’t believe me and that you’ll see this response as more deception. More generally, I suspect I can’t give you what you want in a form you’ll find acceptable.
If I’m right, then perhaps we should leave it at that?
No, for a few reasons. First, they may not believe that what you’re offering is possible—they believe that the loss of a child would outweigh the pleasure that you’d give them. They think that you’d kill the child and give them something they’d enjoy otherwise, but doesn’t make up for losing a child. Though this may count as not believing that you’ll follow through on your offer. Second, people’s action-guiding preferences and enjoyment-governing preferences aren’t always in agreement. Most people don’t want to be wireheaded, and would reject it even if it were offered for free, but they’d still like it once subjected to it. Most people have an action-guiding preference of not letting their children die, regardless of what their enjoyment-governing preference is. Third, there’s a sort-of Newcomblike expected value decision at work, which is that deriving enjoyment from one’s children requires valuing them in such a way that you’d reject offers of greater pleasure—it’s similar to one-boxing.
Ah, OK. And when you talk about “values”, you mean exclusively the things that control what we like, and not the things that control what we want.
Have I got that right?
That is correct. As I see it, wants aren’t important in themselves, only as far as they’re correlated with and indicate likes.
OK. Thanks for clarifying your position.
How would you test this theory?
Give people pleasure, and see whether they say they like it more than other things they do.
This begs the question of whether the word “pleasure” names a real entity. How do you give someone “pleasure”? As opposed to providing them with specific things or experiences that they might enjoy? When they do enjoy something, saying that they enjoy it because of the “pleasure” it gives them is like saying that opium causes sleep by virtue of its dormitive principle.
Do you mean “forcibly wirehead people and see if they decide to remove the pleasure feedback”? Also, see this post.
That’s one way to do it, but not the only way, and it may not even be conclusive, because people’s wants and likes aren’t always in agreement. The test is to see whether they’d like it, not whether they’d want it.
Establishing a lower bound on the complexity of a moral theory that has all the features we want seems like a reasonable thing to do. I don’t think the connotations of “fully general counterargument” are appropriate here. “Fully general” means you can apply it against a theory without really looking at the details of the theory. If you have to establish that the theory is sufficiently simple before applying the counterargument, you are referencing the details of the theory in a way that differentiates from other theories, and the counterargument is not “fully general”.
“This theory is too simple” is something that can be argued against almost any theory you disagree with. That’s why it’s fully general.
No, it isn’t: Anyone familiar with the linguistic havoc sociological theory of systems deigns to inflict on its victims will assure you of that!
Ok, so what’s an example of something that doesn’t count as a “consequence” by your definition?
Beats me. Why does that matter?
To be more precise: given two possible actions A and B, which lead to two different states of the world Wa and Wb, all attributes of Wa that aren’t attributes of Wb are consequences of A, and all attributes of Wb that aren’t attributes of Wa are consequences of B, and can motivate a choice between A and B.
Some attributes shared by Wa and Wb might be consequences of A or B, and others might not be, but I don’t see why it matters for purposes of choosing between A and B.
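(Read literally, that definition is just a set difference over world-state attributes. A toy sketch, with invented attribute names drawn from the create-then-kill example above:)

    # "States of the world" as sets of attributes; the consequences of A are
    # the attributes of Wa that Wb lacks, and vice versa. The attribute names
    # here are invented purely for illustration.
    world_a = {"person lives N years", "body disposed of", "heat death eventually"}
    world_b = {"person never created", "heat death eventually"}

    consequences_of_a = world_a - world_b  # {'person lives N years', 'body disposed of'}
    consequences_of_b = world_b - world_a  # {'person never created'}
    shared = world_a & world_b             # identical in both, so it cannot motivate a choice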
Ok, now you’re hiding the problem in the word “attribute” and to a certain extent “state of the world”; e.g., judging by your reaction to my previous posts, I assume “state of the world” includes the world’s history, not just its state at a given time. Does it also include counterfactual states, a la counterfactual mugging?
Well, I’d agree that there’s no special time such that only the state of the world at that time and at no other time matters. To talk about all times other than the moment the world ends as “the world’s history” seems a little odd, but not actively wrong, I suppose.
As for counterfactuals… beats me. I’m willing to say that a counterfactual is an attribute of a state of the world, and I’m willing to say that it isn’t, but in either case I can’t see how a counterfactual could be an attribute of one state of the world and not another. So I can’t see why it matters when it comes to motivating a choice between A and B.
So what do you do on counterfactual mugging, or Newcomb’s problem for that matter?
Newcomb-like problems: I estimate my confidence (C1) that I can be the sort of person whom Omega predicts will one-box while in fact two-boxing, and my confidence (C2) that Omega predicting I will one-box gets me more money than Omega predicting I will two-box. If C1 is low and C2 is high (as in the classic formulation), I one-box.
Counterfactual-mugging-like problems: I estimate how much it will reduce Omega’s chances of giving $10K to anyone I care about if I reject the offer. If that’s low enough (as in the classic formulation), I keep my money.
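(A rough expected-value sketch of the Newcomb half of that reasoning, assuming the standard $1,000,000 / $1,000 payoffs from the classic formulation; C1 and C2 are the confidences described above.)

    # Rough sketch: one-box vs. two-box as an expected-value comparison.
    def newcomb_choice(c1, c2, big=1_000_000, small=1_000):
        # c1: confidence I can be predicted as a one-boxer while in fact two-boxing
        # c2: confidence that being predicted as a one-boxer gets me the big payoff
        ev_one_box = c2 * big
        ev_two_box = c1 * (big + small) + (1 - c1) * small
        return "one-box" if ev_one_box > ev_two_box else "two-box"

    print(newcomb_choice(c1=0.01, c2=0.99))  # one-box, as in the classic formulation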
The fact that the fundamental laws of physics are time-reversible makes such variations on the 1984-ish theme of “we can change the past” empirically wrong.
???
One of these cases involves the consequence that someone gets killed. How is that not morally neutral?
For the consequentialist to actually start torturing children for this reason, he would have to know, to a high degree of certainty, that the utility function is maximized by torturing children. It may be that, given that he doesn’t have perfect knowledge, he is incapable of knowing that to the required degree. This would mean that he remains a consequentialist but could not be induced to torture children.
Edit: There’s also the possibility that his decision affects how other people make decisions, which is itself a sort of consequence that has to be weighed. If many of the people around him are deontologists, torturing children may have the side effect of making torturing children more acceptable to the deontologists around him, leading to those deontologists torturing children in cases that have bad consequences.
That you can pick hypothetical conditions where your deontological intuition is satisfied by your “utility function” tells us nothing about the situations where the intuition is in direct conflict with your “utility function”.
Let’s make this simple: if you were certain your utility function was maximized by torturing children, would you do it?
As a side note, the topic seems to be utilitarianism, not consequentialism. The terms are not interchangeable.
I am not Omega. I can’t be “certain”.
EDIT: Sorry, turns out you already answered my question. Here are some replacement questions.
You’ve said that you will do nothing, rather than violate a right in order to prevent other rights being violated. Yet you also say that people attempting to violate rights waive their rights not to be stopped. Is this rule designed for the purpose of allowing you to violate people’s rights in order to protect others? That seems unfair to people in situations where there’s no clearly identifiable moustache-twirling villain.
You have also said that people can waive any of their rights—for example, people waive their right not to have sex in order to have sex, and people waive their right not to be murdered in order to commit suicide. Doesn’t this deny the existence of rape within marriage? Isn’t it, in fact, the exact argument that was used to oppose laws prohibiting rape within marriage? This seems worrying. (Obviously, there are other, similar cases that can be constructed, but this one is a major problem.)
Finally, you mention that some actions which do not violate rights are nonetheless “being a dick”, and you will act to prevent and punish these acts in order to discourage them. Doesn’t this imply that there are additional aspects to morality not contained by “rights”? Do you act as a Standard-LessWrong-Consequentialist-Utilitarian™ with regards to Not Being A Dick?
I wish everyone in this thread would be more careful about using the word “right”. If you are trying to violate somebody’s rights, you don’t have “a right not to be stopped”. You have your perfectly normal complement of rights, and some of them are getting in the way of protecting someone else’s rights, so, since you’re the active party, your (contextually relevant) rights are suspended. They remain in effect out of that context (if you are coming at me with a knife I may violently prevent you from being a threat to me; I may not then take your wallet and run off cackling; I may not, ten years later, visit you in prison and inform you that your mother is dead when she is not; etc.).
That’s a good question, but the answer is no. A marriage does not constitute a promise to be permanently sexually available. You could opt to issue standing permission, and I gather this was customary and expected in historical marriages, but you can revoke it at any time; your rights are yours and you may assert them at will. I don’t object to people granting each other standing permission to do things and sticking with it if that’s how they prefer to conduct themselves, but morally speaking the option to refuse remains open.
No. There’s morality, and then there’s all the many things that are not morality. Consequentialists (theoretically, anyway) assign value to everything and add it all up according to the same arithmetic—with whatever epicycles they need not to rob banks and kidnap medical test subjects—but that’s not what I’m doing. Morality limits behavior in certain basic ways. You can be a huge dick and technically not do anything morally wrong. (And people can get back at you all kinds of ways, and not technically do anything morally wrong! It’s not a fun way to live and I don’t really recommend it.)
No. Actually, you could probably call me sort of virtuist with respect to dickishness. I am sort of Standard-LessWrong-Consequentialist-Utilitarian™ with respect to prudence, which is a whole ’nother thing.
Well, sure. I did read your explanation(s). I was assuming the worst-case scenario for the hypothetical, where you have to violate someone’s rights in order to protect others. For example, the classic lying-to-the-nazis-about-the-jews scenario.
Not anymore, no. Because we changed the rules. Because of all the rapes.
I … see, that seems consistent. I assume you can waive the right to abolish an agreement at will, too? That’s the foundation of contract law, but I don’t want to assume.
Indeed, and that’s what I’m asking about. What is this “don’t be a dick” function, and what place does it hold?
Huh. Shame.
… what’s “prudence” in your nomenclature? I haven’t seen the term as you use it.
If you are trying to violate someone’s rights, then your contextually relevant rights are forfeited. For example, the Nazi has forfeited the right not to be lied to.
I’m not sure I understand.
Well, some people are motivated to avoid being dicks, and might value information about how to do it. It’s not very ontologically special.
To me, it looks like consequentialists do prudence exclusively, and name it “morality”, instead of actually doing any morality. Prudence is arranging for things to be how you would like them to be.
Yes, I know. Hence my question:
You see?
Can I waive my right to un-waive a right? For example, if I waive my right not to work for someone, can I also waive my right to change my mind? As in, a standard contract?
I would assume so, but hey, it can’t hurt to ask.
Are you, personally, motivated to discourage dickish behaviour? You are, right? You mentioned you would sue someone for being a dick to you, if it were illegal, even if it was perfectly moral for them to do so.
… as long as “how you would like them to be” doesn’t violate any rights. I think I see what you mean by that.
… ah, and “how you would like things to be” includes “no dickishness”, right?
I do not see; can you start that line of inquiry over completely for me?
I haven’t actually thought about this before, but my instinct is no. Although if you arrange for it to be really hard to communicate a change of mind, someone who went by their last communication with you might not be doing anything wrong, just making a sincere mistake.
I try to minimize it from and around myself. I am not on a Global Dickishness Reduction Campaign of any kind. Maybe I should have said not that I’m not a consequentialist about it, but rather that I’m neither agent- nor recipient-neutral about it? How I would like things to be certainly refers to dickishness.
Sure, I can rephrase.
You’ve said:
You also say that attackers lose their contextually relevant rights, so you can violate their rights in order to defend others.
My original question was, doesn’t that feel like a patch to allow you to act like a consequentialist when it’s clear you have to?
Isn’t that unfair to people in situations where there is no attacker for you to focus on, where relieving their suffering is not a matter of taking out a convenient target?
Also, I just realized this, but doesn’t that mean you should be really concerned about any laws punishing things that don’t violate rights, since those criminals haven’t waived their rights under your system? For example, suing someone for violating your “right to privacy” by publicising a photo taken of you in a public place.
Huh. This universal right to change your mind about commitments seems like the most radical part of your philosophy (although obviously, you’re more tentative about endorsing it) - I noticed you endorsed the right of private individuals to secede from the state.
Yeah, you mentioned being a sort of virtue ethicist here … you would vote for (prudent) anti-dickishness laws, that sort of thing?
No. It doesn’t feel consequentialist to me at all. It’s a patch, but it’s not a consequentialist-flavored patch.
Example?
I am concerned about that, see elsewhere re: guaranteed exit, consent of the governed, etc. etc. blah blah blah.
If you specify that something’s prudent I’m pretty likely to vote for it even if it doesn’t affect dickishness in particular. Yay, prudence!
Yeah, I didn’t understand your position as well when I asked that.
How about tragedies of the commons, for example?
Most countries do not allow criminals to escape punishment by revoking their consent to be punished.
Hmm, good point. Effective anti-dickishness laws, then. (Not that I expect you to change your answer.)
Tragedies of the commons are a coordination problem. My system can cover them if there’s some kind of complex ownership or promise-keeping involved, but doesn’t handle implicit commonses.
Yeah, I’m aware.
I would vote for effective anti-dickishness laws all else being equal but might prioritize reduced dickishness below other things if there were tradeoffs involved.
Well, fair enough, as long as you’re aware of it.
So isn’t there an inconsistency here? Any law punishing something merely dickish, rather than rights violations, is itself a rights violation … right? Or can large-scale prudence outweigh a few minor rights violations?
ETA: That seems reasonable, honestly; even if it does taint the purity of your deontological rules, it only becomes important when determining large-scale policies, which is when you’d want a more precise accounting to shut up and multiply with.
Blah blah guarantee of exit yada yada consent of the governed blah blah I’m really sick now of re-explaining that I am aware of the tensions in my view when it comes to governance and I’m not going to do it anymore.
Alicorn, you just acknowledged that most people being punished are not asked whether they consent to it.
Indeed, attempting to use one’s “guarantee of exit” in these situations is often itself a crime, and one carrying punishments you classify as “rights violations” if I understand you correctly.
That’s sort of why I commented on the potential issues this introduces?
People in the real world do not have guarantee of exit. I’m aware of that. I’ve been over this topic elsewhere in thread more times than I wish to have been.
So … I’m sorry, are you saying you’re actually against these laws, but were rather saying that you would be in favour of them in an ideal world? I appear to have misunderstood you somewhat, so perhaps these repetitions and rephrasings are not in vain.
Thank you for your patience, I know how frustrating it is dealing with inferential gaps and the illusion of clarity better than most :)
It has run out. There is no more patience. It has joined the choir invisible. I said so a couple comments upstream and got downvoted for it; please don’t ask me again.
Well, fair enough. While I’m disappointed not to be able to further improve my understanding of your beliefs, I treasure the LessWrong custom of tapping out of conversations that are no longer productive.
Have a nice day, and may you continue to improve your own understanding of such matters :)
(I think you were actually downvoted for missing the point of my response, by the way. I certainly hope that’s the reason. It would be a great shame if people started downvoting for “tapping out” statements.)
Non-central fallacy.
No, it’s really not.
In fact, it’s precisely the opposite. The central feature of rapes we care about is the fact that they are extremely unpleasant, to put it politely. “Consent”, when formalized so that it no longer captures the information we care about, is noncentral.
Or at least, I think it is. In fact, I believe this should also be clear to Alicorn, on reflection (and indeed she has an explanation for why her system doesn’t fall into this trap.)
Do you disagree?
No, the central issue is in fact consent (there are also other issues related to sex and marriage, but that discussion involves more inferential distance than I’m willing to bridge right now). One way to see this is that it is still considered rape if the victim was unconscious and thus not capable of experiencing anything, pleasant or otherwise. Also, if someone consented to sex at the time but later decides she didn’t enjoy it, I assume you wouldn’t allow her to retroactively declare it rape.
Utilitarians do this. Consequentialists don’t necessarily.
Not only utilitarians, but von Neumann-Morgenstern rational agents.
Well, sure. That wasn’t meant to be an exhaustive list; I only meant to highlight that consequentialism does not necessarily do this.
But yes, you’re quite right.
“No. There’s morality, and then there’s all the many things that are not morality.”
Is this only a linguistic argument about what to call morality? With, e.g., virtue ethics claiming that all areas of life are part of morality, since ethics is about human excellence, and your claim being that ethics only has to do with obligations and rights? Is there a reason you prefer to limit the domain of morality? Is there a concept you think gets lost when all of life is included in ethics (as in virtue ethics or utilitarianism)?
Also, could you clarify the idea of obligations: are there any obligations which don’t emanate from the rights of another person? Are there any obligations which emerge inherently from a person’s humanity and are therefore not waivable?
You could re-name everything, but if you renamed my deontological rules “fleeb”, I would go on considering fleeb to be ontologically distinct in important ways from things that are not fleeb. I’m pretty sure it’s not just linguistic.
Because there’s already a perfectly good vocabulary for the ontologically distinct non-fleeb things that people are motivated to act towards—“prudence”, “axiology”.
Unassailable priority. People start looking at very large numbers and nodding to themselves and deciding that these very large numbers mean that if they take a thought experiment as a given they have to commit atrocities.
Yes; I have a secondary rule which for lack of better terminology I call “the principle of needless destruction”. It states that you shouldn’t go around wrecking stuff for no reason or insufficient reason, with the exact thresholds as yet undefined.
“Humanity” is the wrong word; I apply my ethics across the board to all persons regardless of species. I’m not sure I understand the question even if I substitute “personhood”.
Let’s take truth-telling as an example. What is the difference between saying that there is an obligation to tell the truth, or that honesty is a virtue, or that telling the truth is a terminal value which we must maximize in a consequentialist-type equation? Won’t the different frameworks be mutually supportive, since obligation will create a terminal value, virtue ethics will show how to incorporate that into your personality, and consequentialism will say that we must be prudent in attaining it? Similarly, prudence is a virtue which we must be consequentialist to attain and which is useful in living up to our deontological obligations. And justice is a virtue which emanates from the obligation not to steal and not to harm other people, and therefore we must consider the consequences of our actions so that we don’t end up in a situation where we will act unjustly.
I think I am misunderstanding something in your position, since it seems to me that you don’t seem to disagree with consequentialism in that we need to calculate, but rather in what the terminal values are (with utilitarianism saying utility is the only terminal value and you saying hat there are numerous (such as not lying , not stealing not being destructive etc.))
By obligations which emerge from a person’s personhood which are not waivable, I mean that they emerge from the self and not in relation to another’s rights and therefore can not be waived. To take an example (which I know you do not consider an obligation, but will serve to illustrate the class since many people have this belief) A person has an obligation to live out their life as a result of their personhood and therefore is not allowed to commit suicide since that would be unjust to the self (or nature or god or whatever)
The first thing says you must not lie. The second thing says you must not lie because it signifies or causes defects in your character. The third thing says you must not lie unless there is a compensatory amount of something else encouraging you to lie. The systems really don’t fuse this prettily unless you badly misunderstand at least two of them, I’m afraid. (They can cooperate at different levels and human agents can switch around between implementing each of them, but on a theoretical level I don’t think this works.)
Absolutely not. Did you read Deontology for Consequentialists?
I still don’t know what you mean by “emerge from the self”, but if I understand the class of thing you’re pointing out with the suicide example, I don’t think I have any of those.
Yes, I read that post. (Thank you for putting in all this time clarifying your view.)
I don’t think you understood my question, since “The third thing says you must not lie unless there is a compensatory amount of something else encouraging you to lie” is not viewing ‘not lying’ as a terminal value but rather as an instrumental value. A terminal value would mean that lying is bad not because of what it will lead to (as you explain in that post); but if that is the case, must I act so as not to end up in a situation where I will be forced to lie? For example, let’s say you made a promise to someone not to get fired in your first week at work, and if the boss learns that you cheered for a certain team he will fire you. Would you say that you shouldn’t watch that game, since you will otherwise be forced to either lie to the boss or break your promise of keeping your job? (Please fix any loopholes you notice, since this is only meant for illustration.) If so, it seems like the consequentialist utilitarian is saying that there is a deontological obligation to maximize utility, and therefore you must act to maximize that, whereas you are arguing that there are other deontological values, but you would agree that you should be prudent in achieving your deontological obligations. (We can put virtue ethics to the side if you want, but won’t your deontological commitments dictate which virtues you must have, for example honesty, or even courage, so as to act in line with your deontological obligations?)
That’s a very long paragraph, I’m going to do my best but some things may have been lost in the wall of text.
I understand the difference between terminal and instrumental values, but your conclusion doesn’t follow from this distinction. You can have multiple terminal values. If you terminally value both not-lying and also (to take a silly example) chocolate cake, you will lie to get a large amount of chocolate cake (where the value of “large” is defined somewhere in your utility function). Even if your only terminal value is not-lying, you might find yourself in an odd corner case where you can lie once and thereby avoid lying many times elsewhere. Or if you also value other people not lying, you could lie once to prevent many other people from lying.
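To put rough numbers on that trade-off, here is a minimal sketch in Python (the weights and the `utility` helper are entirely made up for illustration, not anything from the actual discussion):

```python
# Toy agent with two terminal values: not-lying and chocolate cake.
# The weights below are arbitrary; the only point is that once both
# values live inside one maximized quantity, some exchange rate exists.
LIE_PENALTY = 10.0          # disutility assigned to telling one lie
CAKE_VALUE_PER_SLICE = 1.0  # utility assigned to each slice of cake

def utility(lies_told: int, cake_slices: int) -> float:
    return cake_slices * CAKE_VALUE_PER_SLICE - lies_told * LIE_PENALTY

honest = utility(lies_told=0, cake_slices=0)      # tell the truth, get no cake
dishonest = utility(lies_told=1, cake_slices=25)  # one lie buys a lot of cake

print(honest, dishonest)  # 0.0 15.0 -- with these weights, the agent lies
```

Under a deontological prohibition, by contrast, the lie never enters that sum in the first place.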
AAAAAAAAAAAH
It is prudent to be prudent in achieving your deontological obligations. Putting “should” in that sentence flirts with equivocation.
I think it’s possible to act completely morally acceptably according to my system while having whopping defects of character that would make any virtue ethicist blush. It might be unlikely, but it’s not impossible.
Thank you, I think I understand this now.
To make sure I understand you correctly, are these correct conclusions from what you have said?
a. It is permitted (i.e. ethical) to lie to yourself (though probably not prudent).
b. It is permitted (i.e. ethical) to act in a way which will force you to tell a lie tomorrow.
c. It is forbidden (i.e. unethical) to lie now to avoid lying tomorrow (no matter how many times or how significant the lie in the future).
d. The differences between the systems will only express themselves in unusual corner cases, but the underlying conceptual structure is very different.
I still don’t understand your view of utilitarian consequentialism, if ‘maximizing utility’ isn’t a deontological obligation emanating from personhood or the like, where does it come from?
A, B, and C all look correct as stated, presuming situations really did meet the weird criteria for B and C. I think differences between consequentialism and deontology come up sometimes in regular situations, but less often when humans are running them, since human architecture will drag us all towards a fuzzy intuitionist middle.
I don’t think I understand the last paragraph. Can you rephrase?
Why don’t you view the consequentialist imperative to always seek maximum utility as a deontological rule? If it isn’t deontological where does it come from?
The imperative to maximize utility is utilitarian, not necessarily consequentialist. I know I keep harping on this point, but it’s an important distinction.
Edit: And even more specifically, it’s total utilitarian.
Keep up the good work. Any idea where this conflation might have come from? It’s widespread enough that there might be some commonly misunderstood article in the archives.
I don’t know if it’s anything specific… classic utilitarianism is the most common form of consequentialism espoused on Lesswrong, I think, so it could be as simple as “the most commonly encountered member of a category is assumed to represent the whole category”.
It could also be because utilitarianism was the first (?) form of consequentialism to be put forth by philosophers. Certainly it predates some of the more esoteric forms of consequentialism. I’m pretty sure it’s also got more famous philosophers defending it, by rather a large margin, than any other form of consequentialism.
It’s VNM consequentialist, which is a broader category than the common meaning of “utilitarian”.
To me, it looks like consequentialists care exclusively about prudence, which I also care about, and not at all about morality, which I also care about. It looks to me like the thing consequentialists call morality just is prudence and comes from the same places prudence comes from—wanting things, appreciating the nature of cause and effect, etc.
Thank you for all of your clarifications, I think I now understand how you are viewing morality.
Could you elaborate on what this thing you call “morality” is?
To me, it seems like the “morality” that deontology aspires to be, or to represent / capture, doesn’t actually exist, and thus deontology fails on its own criterion. Consequentialism also fails in this sense, of course, but consequentialism does not actually attempt to work as the sort of “morality” you seem to be referring to.
What good are their rights to anyone who has starved to death?
Yes, and do you have any reason why this is in fact not a valid conclusion? Or is this an appeal to what the law happens to say today?
I, personally, find this situation morally repugnant. Psychological unity of mankind leads me to hope Alicorn does too. What more justification could you ask?
However, even though signing a contract does not seem to remove the harm of rape, I of course cannot rule out the possibility that I am picturing the situation incorrectly, or that the benefits would in fact outweigh the harm. (Yes, Alicorn has stated that they care about harms outside their framework of rights.)
Alicorn, on the other hand, likely already holds the standard opinion on rape (it is bad), and thus would find a certain inconsistency in endorsing a position that was OK with it. So in that sense, yes, the law today is valid evidence that this might be an issue, if one looks at the causal chain that led up to it changing.
Well, the fact that these laws were only passed very recently suggests that it is you who is out of step with the psychological unity.
The context appears to be a moral panic about rape that, among other things, argues for dispensing with due process for accused rapists, and holds that if two drunk people have sex and regret it later, this means the man raped the woman. So no, the law today is not in fact valid evidence.
I was relying on the framing; obviously I wouldn’t expect people to respond the same way in literally any context. (You’re right, I didn’t make that clear.)
Hmm. It is true that rapists are demonized, and this is sometimes extended past edge cases—but obviously, you are yourself relying on the fact that this is obvious nonsense to most people for your rhetorical point.
This seems more akin to similar effects that spring up around other major crimes—not that that makes it rational, of course, or implies that the laws genuinely have greater expected utility than their inverse.
I have no idea how to interpret this even semi-charitably. To me this translates to “I was relying on dark arts”.
I was relying on the hope that you weren’t sufficiently caught up in the panic to no longer recognize this as obvious nonsense. My point is that I consider relying on laws that have only been passed in the previous decade, especially given that there’s a moral panic involved, to be highly dubious.
Yes, similar effects have sprung up around other major crimes in the past. However, I believe that rapists are the moral panic du jour.
It’s less likely that someone will ignore facts that have been recently brought to their attention. You’re right, I wasn’t quite sure what word to use there, I may have screwed up.
With respect, have you ever said that to someone and had them respond “Well yeah, sure! Of course women can retroactively make anyone they’ve ever had sex with a rapist.”
I’m sure you’ve seen people endorse equivalent conclusions via deceptive wording, but the general response I would predict from all but a very small core would be “we don’t believe that!”
Hmm, perhaps. I would have gone for pedophiles myself (a term already spreading to include all creepily large age gaps, by the way), but this isn’t really a contest.
Psychological what of what? You mean “current societal norms based on a multitude of shifting cultural and biological determinants, subject to change”?
No, I don’t, as you’re well aware from our many, many lengthy discussions on the point.
I’ll note that my prediction in this case was correct, no?
Alicorn is part of the same WEIRD culture you are, so I don’t see how this is evidence that the belief is universal.
The belief is not universal. The ability to empathise with rape victims is (well, actually, we define our terms so as to exclude psychopaths and the like.) Also, yes, I share certain cultural assumptions and conventions, so I have reason to think Alicorn may respond the same way.
My model predicted Alicorn would react to this specific question about that belief, after living her sort of life, with a reluctance or outright refusal to bite the bullet and endorse rape. Not that every human ever would unilaterally endorse my particular belief about rape.
[Kawoomba—you have no way of knowing this—strenuously objects to my model of human nature, since it predicts human CEV coheres rather well, whereas they believe that ethics are (and should be?) entirely relative and largely determined by one’s circumstances. They like to jump on anything I say that even vaguely implies human morality is somehow universal. There are some quite comprehensive discussions of this point scattered throughout LessWrong.]
Whether the activity in question constitutes “rape” is precisely the question under discussion.
Actually, I don’t doubt that there are many characteristics that apply to a vast majority of humans, thanks e.g. to mirror neurons et al. As such, basing predictions on such common factors is quite valid. For example, predicting that someone who doesn’t eat for extended periods will experience a state called “being hungry”, and will show certain behavioral characteristics associated with that state.
I just dislike the term because it’s typically used not as a descriptor (“many humans show a propensity for X”; in the example, “if you don’t eat for a period of time, you will probably act in accordance with ‘being hungry’”) but (invalidly) as a prescription (“if you don’t eat for a period of time, you should act in accordance with ‘being hungry’”). Getting an “ought” from an “is”, and all that.
I can predict with reasonable confidence that all participants in the current discussion are currently wearing clothes. Based on the “garmental unity of mankind”. Doesn’t mean they should.
If PUoM means there are shared desires, then the only way you could fail to get the “ought” from the “is” is by denying that “oughts” have anything to do with desires, surely.
For one, if there are supposedly universally shared desires among a group of agents, and yet you find one agent among them who doesn’t share those, you’ve found a contradiction—those universally shared desires weren’t universally shared, after all.
For two, describing that a majority (or even a supermajority, or a super-duper-overwhelming majority) shares certain desires would be a description, not some majority-tyrannical edict for the outliers to conform to. For example: Stating that a majority of humans share heterosexual desires doesn’t mean that those who don’t somehow should, as well (just to go with the obvious applause-light example which is applicable in this case, it’s easy to come up with arbitrarily many other examples).
And don’t call me Shirley.
Facts about desires conjoined with an intuitively appealing maxim, such as “everyone ought to maximize the satisfaction of everyone’s desires”, can imply oughts.
What is this “intuitively appealing” justification, and why would it be binding for those it doesn’t “intuitively appeal” to? It’s not my intuition.
You wrote that you believe that persons have rights. How do you determine what rights they have?
IANAlicorn, but, since I have the same belief, I’ll give it a shot. My imperfect introspection tells me that, since the world where people don’t have rights would quickly become unfair and full of suffering (and this has been repeatedly experimentally tested), I want to live in a world where I, my family or someone I can identify with would have less of a chance of being treated unfairly and made to suffer needlessly. Pretending that people have “unalienable rights” goes a long way toward that goal, so I want to believe it and I want everyone else to believe it, too. To dig deeper, I am forced to examine the sources for my desire for fairness and the origins of my empathy (imperfect though it is), and the available literature points to the mix of genetics and upbringing.
That sounds like rule utilitarianism, or a rule utilitarianism-like consequentialism, not like a deontological justification for human rights.
I suppose you are right. However, if you skip the introspection part, “people have rights” makes sense in most cases without having to worry about utilities. It’s the edge cases, like the trolley problem, which require deeper analysis.
I agree, but that’s all basically consequentialist.
A decent justification, but not very deontological. What I was curious about is how Alicorn determines what rights exist purely deontologically, without reference to consequences.
Since I’m no longer maybe going to write a thesis on it, mostly I don’t work on this a lot. Not lying, not stealing, and not attacking people does pretty good for everyday. There’s sort of an informal checklist when I think something might be a right—the rights have to be reasonably consistent with each other, they’re overwhelmingly negative rights and not positive ones, simpler ones are better, etc. This would be easier with a sample maybe-a-right but I haven’t examined any of those recently.
If I may offer one —
Suppose that I am photographed on the street outside a place that has a bad reputation (with some people). The photographer might publish the photo, which could lead viewers to believe bad things of me.
One acquaintance of mine, M, claims that I have a right to forbid the photographer from publishing this photo; I have the right to control publicity about me or the use of my image, even though the picture was taken in public.
Another acquaintance, B, claims that the photographer has a freedom-of-speech right to publish it, so long as they do not explicitly say anything false about me. B believes that it would be nice of the photographer to ask my permission, but that I do not have a right to this niceness.
Still another acquaintance, R, says that it depends on who I am: if I am the mayor of Toronto, I have no right to control photos of me, since my actions are of public interest; but if I am a privately employed engineer of no public reputation, then I do have that right.
Okay, I’ll walk through my process of apprehending and making a call on this situation. It looks like a good example, thanks for coming up with it.
The conflict here is between you, and the photographer—other persons in the story have opinions but aren’t directly involved. The steps are that there was an opportunity to ask you if it was okay to photograph you (which the photographer passed over), the decision to photograph you, the opportunity to ask you if it’s okay to publish it (which the photographer also passes over), and the decision currently at hand of whether to publish the photo. If there’s a rights violation potential or actual, it’s probably in one of those places. The statement of the problem doesn’t suggest that you’ve committed any rights violations by happening to be in this location.
The fact that two chances to ask for consent have been passed up is suspicious. It’s not a guarantee that something has gone wrong—the photographer is allowed to look at you without securing permission, for instance—but it’s a red flag. In the normal course of things, people waive some of their rights when asked explicitly or by other mechanisms all the time. People waive their right to refuse sex, for instance, on a per-occasion basis in order to have non-rape sex.
You don’t actually do anything in this story, except exist in a place that by stipulation you are quite entitled to be, so only the photographer might be committing a wrong here.
So the obvious candidate possibilities are that you have a right not to be photographed, or that you have the right not to have your likeness publicized, or that you have no such rights and the photographer may do as they like with the photograph provided no other wrongs (such as libel, a form of lying) are committed in so doing.
But earlier I said the photographer is allowed to look at you without permission. You’re in a public place, where anyone, including photographers as an unspecial class of anyone, may walk by. The upshot of a photograph is that others, too, may look at you. Any of them could have walked by in real time and seen you without wronging you. There’s no obvious mechanism by which a gap in time should change things. If one of the passersby had a photographic memory, that wouldn’t change the fact that they could look at you. Similarly, the fact that people who live far away from this location might not have had a chance to show up in person and espy your presence, or the information that anything going on might be of interest, doesn’t seem like it has anything to do with anything.
So it seems to me that the photographer is probably, at worst, being a dick. You do not have a right to prohibit someone from photographing you and publishing a photo of you unless something else is going on. (I feel like I should mention now that I hate this result. I don’t photograph well and definitely don’t like the idea of my likeness being used in any which way without my agreement. In fact, if the law allowed, I might even pursue someone who did this to me via legal routes, which—I hope obviously—are separate from ethical condemnation. In any event I’m not rigging this to suit myself.)
But supposing it were the other way around, your acquaintance R might still have a point. Assuming it’s known that elected politicians are treated differently with respect to the use of their photographs, pursuing a career as a politician might constitute waiving one’s right not to be photographed (if we had that right, which I concluded above we probably don’t). In this counterfactual, this would apply to the mayor of Toronto, but not people who became public figures without doing anything wrong (that would be a potential rights forfeiture, particularly if they are a threat to the photograph-viewing population) or choosing to become public figures on purpose.
Ok, this is quite a detailed response and I appreciate the thought that went into writing it. However, from my perspective, it raises more questions than it answers. For example, you say things like this:
But why not? I’m not asking this just to be a contrarian. For example, many people believe that all of us have a fundamental right to privacy; this would imply that you do, in fact, have the right to “prohibit someone from photographing you and publishing a photo of you”. Presumably you disagree, but on what basis?
Furthermore, you say that you
I don’t see how that works. If you believe that a person has a right to photograph you and publish the photo without your permission, and yet you are launching a legal challenge against him for doing so, then are you not engaging in an immoral attack? Sure, it’s not a physical attack, but it’s an attack nonetheless, and previously you stated (assuming I understood you correctly) that attacks on people who have violated no rights are immoral.
Can you tell me where I lost you in the description of why I do disagree?
No. This isn’t how the right is framed. They don’t have a right to do it; no one must behave in a way to protect their ability to do so; I don’t have to stand still so they can get a clear shot, I don’t have to go out in public, if I happen to own a news outlet I don’t have to give them a platform, if I acquire a device that makes me look like a smudge of static to nearby cameras I can carry it around without this being morally problematic. (Perhaps unless I’m under some sort of agreement to be visible to security cameras, or something like that.) I just don’t have the right to not be photographed. Remember that rights are overwhelmingly negative. The fact that someone commits no wrong by doing X does not imply that others commit a wrong by making X inconvenient or impossible.
(You’re also being kind of overbroad in understanding my use of the word “attack”, which was intended broadly, but not so broadly as to include “seeking legal recompense in response to an upsetting, by stipulation illegal, behavior which does not happen to constitute a violation of any intrinsic moral rights”.)
You state that “you do not have a right to prohibit someone from photographing you”, but I don’t understand where this rule comes from. You expand on it in your explanation that follows, but again, I don’t fully understand your reasoning. You say:
That makes sense to me, in that your rule is consistent with the rest of your system. I may even agree that it’s a good idea from a consequentialist point of view. However, I still do not understand where the rule comes from. Is photographing you qualitatively different from murdering you (which would presumably be immoral), and if so, why? Come to think of it, why are all rights negative?
I may have misunderstood your goals. In launching a legal challenge, what do you hope to accomplish in terms of “recompense”? Are you seeking to extract a fine from the photographer, or perhaps to restrict his freedom in some way, or both?
Let’s say that you seek a fine, and you succeed. How is that different, morally speaking, from breaking into his house and stealing the money? In one case you use a lockpick; in the other you use a lawyer; but in the end you still deprive the photographer of some of his money. Why does one action count as an attack, and the other does not?
Now that I think of it, perhaps you would consider neither action to be an attack? Once again, I’m not entirely sure I understand your position.
Are all rights negative? What about, say, the right to life, or the right to -not-starving-to-death?
Many people seem pretty jazzed about the idea of a “right to marriage” or a “right to insert-euphemism-for-abortion-here”, based largely on the fact that our (as in, their and my) tribe considers the policies these imply applause lights. I have no idea what Alicorn thinks of this, though.
I’m fine with that kind of right existing in the legal sense and encourage all the ones you listed. I don’t think anyone has a fundamental moral obligation to feed you or perform abortions for you or conduct a marriage ceremony for you, though you can often get them to agree to it anyway, empirically, with the use of money.
If I may, I’m curious on what basis you consider those rights a good idea? Is it just a whim? Are you worried real rights might be violated?
I’m not usually in favour of calling all these various things “rights”, since it rather confuses things—as you’re probably aware—but I must admit the “right to not-starving-to-death” sounds important.
Are you saying you would be OK with letting people starve to death? Or am I misunderstanding?
I think those rights-in-the-legal-sense are good for political and social reasons. Basically, I think they’re prudent.
I don’t think I am doing something wrong, right now, by having liquid savings while there exist charities that feed the hungry, some of whom are sufficiently food-insecure for that to make a difference to their survival. I bite the bullet if the starving person happens to be nearby: this doesn’t affect their rights, and only rights have a claim on my moral obligations. I might choose to feed a starving person. I will support political policy that seems like it will get more starving people fed. I will tend to find myself emotionally distraught on contemplating that people are starving, so I’d resent the “OK with it” description. Also, when I have children, they will have positive claims on me and I will be morally obligated to see to it that they are fed. Other than that? I don’t think we have to feed each other. It’s supererogatory.
I see. And what are those reasons?
Yeah, that’s a common problem with consequentialists. Obviously, we have various instincts about this and it’s both hard and dangerous to ignore them.
I’m actually somewhat pleased to hear that, because it’s not the first time a deontologist has told me that. I was too speechless to respond, and they changed their mind before I could find out more.
Ah, here we go. That’s good!
You do realize that sort of thing is usually part of what’s referred to by “morality”? So leaving it out seems … incomplete.
Postscript:
I’m not sure, but it may be that there’s something causing some confusion, by the way. I’ve seen it happen before in similar discussions.
There seem to be two functions people use “morality” for—judging people, and judging actions.
Consequentialists, or at least standard-lesswrong-utilitarian-consequentialists, resolve this by not judging people, except in order to predict them—and even then, “good” or “bad” tend to be counterproductive labels for people, epistemically.
Instead, they focus entirely on judging their actions—asking, which of my options is the correct one?
But I gather you (like a lot of people) do judge people—if someone violates your moral code, that makes them an acceptable casualty in defending the rights of innocents. (Although you’re not a total virtue ethicist, who only judges people, not actions; the polar opposite of LW standard.)
I’m not really going anywhere with this, it just seems possibly relevant to the discussion at hand.
I’m gonna decline to discuss politics on this platform. If you really want to talk politics with me we can do it somewhere else, I guess.
I would tend to describe myself as judging actions and not people directly, though I think you can produce an assessment of a person that is based on their moral behavior and not go too far wrong given how humans work.
Oh! Oh, OK. Sorry, I just assumed those were so vague as to avoid mindkilling. Of course we shouldn’t derail this into a political debate.
In fairness, you’re clearly smart enough to disregard all the obvious mistakes where we make prisons as awful as possible (or, more likely, just resist making them better) because we Hate Criminals.
This is a more abstract idea, and much more limited. I’m not criticising it (here). Just noting that I’ve seen it cause confusion in the past, if left unnoticed.
Actually, what the heck, while I’m here I may as well criticize slightly.
This is little more than my first thought on reviewing what I think I understand as your moral system.
It’s discovered Hitler’s brain was saved all those years ago. Given a giant robot body by mad scientists, he is rapidly re-elected and repurposes the police force into stormtroopers for rounding up Jews.
You, being reasonably ethical, have some Jews hidden in your house. The stormtroopers have violated relevant rights (actually, they could be new on the job, but you can just tell they’re thinking about it—good enough), so their relevant right not to be lied to is waived, and you tell them “no Jews hidden here!” quite cheerfully before shutting the door.
However, naturally, they know Bayes’ theorem and they’ve read some of your writings, so they know you’re allowed to lie and your words aren’t evidence either way—although silence would be. So they devise a Cunning Plan.
They go around next door and talk to Mrs. Biddy, a sweet old lady who is, sadly, unaware of the recent political shift. She was always told to respect police officers and she’s not going to stop now at her age. The white-haired old grandmother comes around to repeat the question the nice men asked her to.
She’s at the door, with the two stormtroopers standing behind her grinning. What do you do?
I mean, obviously, you betray the Jews and they get dragged off to a concentration camp somewhere. Can’t lie to the innocent old lady, can’t stay silent because that’s strong Bayesian evidence too.
But you’re reasonably intelligent; you must have considered an isomorphic case when constructing this system, right? Casualties in a Just War or something? Two people manipulated into attacking each other while dressed as bears (“so the bear won’t see you coming”)? Do you bite this bullet? Have I gone insane from lack of sleep? I’m going to bed.
Why are all these hypothetical people so well-versed in one oddball deontologist’s opinions? If they’re that well-read they probably know I’m half Jewish and drag me off without asking me anything.
Mrs. Biddy sounds culpably ignorant to me, anyway.
You may or may not have gone insane from lack of sleep.
Um, the purpose of prisons is to punish criminals, so yes, prisons should be awful, not necessarily “as awful as possible”, but for sufficiently serious crimes quite possibly.
EDIT: Wait, you mean “punish” in the consequentialist sense of punishing defection, right?
Yes, but this does not imply that the current level of awfulness is optimal. It certainly does not mean we should increase the awfulness beyond the optimal level.
But if someone proposes that the current level is too high (whether on consequentialist or legal grounds), one of the arguments they will encounter is “you want to help rapists and murderers?! Why? Those bastards deserve it.”
(The consequentialist argument for, say, the current state of US prisons is of course undercut by the existence of other countries with much less awful prisons.)
If you want to look at optimal awfulness, there is a much better way to test it: look at the crime rate. The current crime rate is extremely high by historic standards. Furthermore, the recent drop from its peak in the 1970s has been accomplished by basically turning major cities into Orwellian surveillance states. I think increasing the awfulness of prisons would be a better solution; at the very least it puts the burden on the criminals rather than the innocent.
That really isn’t a good argument for the current state of US prisons, is it? Clearly, even openly allowing institutional rape has failed to help; yet other, less harsh countries have not seen soaring crime rates by comparison.
I’ve seen studies suggesting that certainty of punishment is much more important for determining behavior than the extremity of it—it’s more a question of a strong justice system, a respect for authority (or fear, one might say), than people performing expected utility calculation in their heads.
Personally I’m in favor of corporal punishment, cheaper than prisons and you don’t have the problem of long term prisoners getting used to it.
It is known that lots of people enjoy inflicting pain on the helpless. Anyone who punishes prisoners because they enjoy doing so is in a conflict of interest, at least if he has any discretion in how to carry out the punishment.
Also, it’s possible to take that effect into account when deciding punishment.
More so than existing prison guards?
I don’t know if it’s more so because comparing degrees here is hard, but I would say that we should not hire prison guards who enjoy punishing prisoners and have discretion in doing so.
That is one possible purpose to have prisons, but not the only one.
You can rephrase “punishing criminals” in terms of quasi-consequentialist decision theory as deterrence/counterfactual crime prevention. All the other reasons I’ve heard are little more than rationalizations by people who want to punish/deter criminals but feel icky about the word “punishment”.
What possible reasons there could plausibly be for jailing people, and what actually in fact motivates most people to support jailing people, are not the same thing.
Some possibilities for the former include:
1. Retribution (i.e., punishing criminals because they deserve it).
2. Closure/satisfaction for the victim(s), or for family/friends of the victim(s).
3. Deterrence, i.e. protecting society from counterfactual future crimes we expect other people to otherwise perpetrate.
4. Protecting society from counterfactual future crimes we expect this same criminal to otherwise perpetrate.
5. Rehabilitation.
6. … (other things I am not thinking of at the moment)
None of those things are the same as any of the others. Some fit the rather imprecise term “punishment” closely (1, 2), others not so closely (3, 4), still others not at all (5).
I would argue that (1) and (2) are in fact the same thing just formulated at different meta-levels, and that (3) and (4) are the quasi-consequentialist decision theory “translations” of (1) and (2). Rehabilitation (5) is what I called a fake reason, as can be seen by the fact that the people promoting it are remarkably uninterested in whether their rehabilitation methods actually work.
I’m not entirely sure what you mean by this. Are you suggesting that people who advocate (3) and (4) as actual justifications for having prisons do not have those things as their true, internal motivations, but are only claiming them for persuasion purposes, and that actually (1) and/or (2) are their real reasons? Or are you saying something else?
That may well be, but that doesn’t make it not an actual good reason to have prisons.
Your comment which prompted me to start this subthread spoke about what should be the case. If you say “this-and-such are the actual motivations people have for advocating/supporting the existence of prisons”, fine and well. But when you talk about what should happen or what should exist, then people’s actual internal motivations for advocating what should happen/exist don’t enter into it.
Something else; see my reply to hen, where I go into more detail about this.
See hen’s comment for the problem I have with rehabilitation.
With respect, both hen’s comment and your reply read to me like nonsense. I can neither make sense of what either of you are saying, nor, to the degree that I can, see any reason why you would claim the things you seem to be claiming. Of course, I could merely be misunderstanding your points.
However, I think we have now gone on a tangent far removed from anything resembling the original topic, and so I will refrain from continuing this subthread. (I’ll read any responses you make, though.)
I think Eugine_Nier might be trying to say that the reason we evolved the emotions of anger and thirst for vengeance is because being known to be vengeful (even irrationally so) is itself a good deterrent. And possibly that this therefore makes these the same thing. But I’m not sure about that because that seems to me like a straightforward case of mixing up adaptation executors and fitness maximizers.
You mean hen’s comment about the dignity of moral agents, or my statement about how deterrence is the quasi-consequentialist translation of retribution?
Both, I’m afraid.
To see what I mean by the dignity of moral agents, think of a criminal as a moral agent, rather than a defective object to be fixed. The idea of rehabilitation should then acquire a certain Orwellian/totalitarian aura; i.e., this is the kind of thing the Ministry of Love does.
As for my statement about deterrence and retribution, I believe we’re having that discussion here.
A datapoint: I think the purpose of prisons is the institutional expression of anger, and insofar as they do this, they are an expression of respect for the criminal as a moral agent. In fact, I think that the use of prisons as a deterrent or to modify behavior is downright evil: you’re not allowed to put people in a box and not let them out just to change the way they act, and especially not to communicate something to other people.
(For the record, it looks like you may not be a consequentialist, but it seems worth asking.)
Um … why not? I mean, when we all agree it’s a good idea, there are reasonable safeguards in place, we’ve checked it really does reduce rapes, murders, thefts, brutal beatings … why not?
Is it OK to lock someone in a box because you’re angry? Isn’t that, in fact, evil? Does it become OK if you “respect” them (I’m not sure what this refers to, I admit.)
I should probably mention that hen has answered me via PM, and they are, in fact, basing this on consequentialist (more or less) concerns.
I more or less agree with your worldview, with the caveat that I would interpret counterfactual crime prevention as anger translated into decision-theory language (it helps to think about the reason we evolved the emotion of anger). Deterrence as applied to other people is a version of counterfactual crime prevention where we restrict our thinking to other people in this event branch, as opposed to all event branches.
To a VNM consequentialist, in every situation there is a unique “best action”; by contrast, for a deontologist or virtue ethicist, their morality doesn’t specify a single action to take. Thus you are allowed (and possibly encouraged) to help the starving man, but aren’t required to.
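As a toy sketch of that structural difference (the actions and numbers here are invented purely for illustration, not anyone’s actual position): the VNM-style calculation returns a single best option, while the deontic rules only sort options into permitted and forbidden and stay silent about ranking the survivors.

```python
# Toy comparison: consequentialist choice vs. deontic permissibility.
ACTIONS = {
    "help the starving man": {"utility": 10, "violates_right": False},
    "walk past":             {"utility": 0,  "violates_right": False},
    "steal his wallet":      {"utility": 2,  "violates_right": True},
}

# VNM-style consequentialist: pick the unique utility-maximizing action
# (setting aside exact ties).
best_action = max(ACTIONS, key=lambda name: ACTIONS[name]["utility"])

# Deontic filter: anything that violates no right is permitted; the rules
# do not rank the remaining options against each other.
permitted = [name for name, props in ACTIONS.items() if not props["violates_right"]]

print(best_action)  # help the starving man
print(permitted)    # ['help the starving man', 'walk past'] -- both allowed
```

Which is how “you may help, but you aren’t required to” falls out of the second picture.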
And? I should hope anyone reading this thread has already figured that out—from all the times it was mentioned.
Is there some sort of implication of this I’m too stupid to see?
That it doesn’t require bullet biting to say that you are not morally obligated to help the starving person.
How so? It’s an unpleasant thing to say, and conflicts with our raw intuition on the matter. It sounds evil. That’s all biting a bullet is.
Remember, it’s sometimes correct to bite bullets.
What do you mean our intuition on the matter? My intuition says that at least it depends on how the man came to be starving.
Well, since Alicorn’s system does not take account of that, this is in any case biting a bullet for you as well.
[With that acknowledged, I am curious about those intuitions of yours. Is this about punishing defection? The standard “well, if they’re a Bad Person, they deserve what’s coming to them”? Or more “it’s their own fault, they made their bed let them lie in it, why should we be responsible for their foolishness”, that sort of thing?]
As you may have noticed, I’m not Alicorn.
Both. Also, I have some more examples that could fall under one or both, depending on how one defines “defection” and “foolishness”. If someone decided that they’d rather not work and instead rely on my charity to get food, they won’t be getting my charity. Also, if CronoDAS comes by my house begging for food, the answer is no.
Another example is that my response to the famous train dilemma depends on what the people were doing on the track. If they were, say, picking up pennies, I’m letting them get run over.
Well … yeah. Because you’re replying to something I said to Alicorn.
Is this for game-theoretic reasons, or more of a virtue-ethics “lazy people don’t deserve food” thing?
Are we killing people for stupidity now? I mean, I guess if the numbers were equal, the group drawn from the general population is a better bet to save than the group selected for “plays on train tracks”—but I don’t think that’s what you meant.
Wait, is this a signalling thing? Y’know, sophisticated despair at the foolish masses? If it is, there’s no need to reply to this part; I’ll drop it.
Did you click on my link? “Picking up pennies on railroad tracks/in front of a steamroller” is a well-known metaphor in economic circles for taking certain types of risks.
However, to answer your question: no, I (normally) won’t kill someone for their stupidity, but I see no reason to save them, and certainly no reason to kill other people to save them.
Yes, I clicked the link.
OK, that’s a little scary (or would be, anyway). Um … why don’t you care about the suffering and death of someone “stupid” (or risk-taking)?
What I find scary is that you appear to be willing to sacrifice innocent bystanders to save stupid people from their own stupidity.
Why should I?
If they chose to take that kind of risk, they are responsible for its consequences.
Would you prefer that others care about your suffering and death, if something happened such that you became (temporarily or permanently) “stupid”?
In many cases, people are not aware of the risks they are taking; in many other cases, people may not have less-risky alternatives. Should they still be entirely responsible for their consequences? Because that seems to lean towards “just-world hypothesis” thinking, and if that’s where this is going, we may want to just go there and be done with it.
Would you like to be the innocent bystander sacrificed to save an idiot from the consequences of his own stupidity?
Me in particular, or people in general? Because there is a particular class of idiot that most people would GLADLY be sacrificed to save; they’re called “children”.
As for me, personally, that depends on the calculus. Am I saving one idiot, or ten? Are they merely idiotic in this circumstance, or idiotic in general (i.e., in most situations a normal human being might reasonably find themselves in)? Are we talking about a well-medicated version of me with a good chance of contributing meaningfully to society, or a cynical, hopeless, clinically depressed version of me that would gladly take ANY reason to die? Because I think at this point, we’re talking about weighted values, and I quite imagine that there’s a certain number of certain kinds of idiots that I would absolutely consider more worth saving than certain versions of myself, if I was doing the calculus honestly.
And if I’m not doing the calculus honestly, then I’m an idiot.
I think that here, “idiot” refers to idiocy for which the person is to blame. Children are not generally to blame for being idiots.
Can you describe the mechanism by which children are not to blame for their stupidity, but other beings are?
People do not choose to be children. People do choose to be careless or to refuse to learn. Idiocy that is caused by carelessness or refusal to learn is therefore the person’s fault.
In the unlikely case of someone who has, for instance, been infected by nanobots that force his brain to act carelessly, I would of course not hold him to blame.
As opposed to, say, just a reduced capacity for impulse control or learning? Or an ingrained aversion to thinking before acting?
EDIT: Heh. Actually… It looks like your specific example is more plausible than I thought.
Put more bluntly: are there some classes of people which are less a product of their environments and biologies than others?
(And I’m not merely saying this from the perspective of “why do you hold idiots accountable”; I’m also asking “why do children get a free pass?”)
I don’t give children a free pass. If an adult is sufficiently incompetent, I wouldn’t blame him, either.
However, I would not classify an adult as sufficiently incompetent for these purposes unless his impulse control is so bad that he can’t safely live on his own. (There is no inconsistency between this and considering children incompetent, since children cannot safely live on their own either.)
In the example given, I think that if people are incompetent enough to risk physical injury or death for the sake of picking up pennies, that’s pretty good evidence that they can’t safely live on their own without supervision.
If they managed to survive long enough to get to the railroad track to pick up the pennies, they’re probably able to live on their own without supervision unless there was an extreme stroke of luck involved (such as having been released from custody fifteen minutes ago).
They don’t quite choose to live in places with lots of lead, more omega-6 than omega-3 fats, and little lithium either, for that matter.
Would the government not be violating rights (rights-not-in-the-legal-sense) if it forces people to feed those whom they don’t want to feed?
My feelings about governments are complicated by the guarantee-of-exit thing I mentioned elsethread, but with that understood, I’m not opposed on any level to systematic taxation. If a government were rounding people up to work in agriculture or soup kitchens or what have you against their will, that would be wrong.
In the absence of guarantee-of-exit (and guarantee-of-entrance-into-something else?), is taxation a violation of people’s rights? If not, why not?
No; there doesn’t have to be a society that wants you, or for that matter one that is agreeable to your preferences.
I don’t think so. I think failing to provide guarantee-of-exit is a failing on the part of various governments and it does make some things they do less defensible, but I’m not opposed to taxes. Part of it is actually that it’s not a person collecting taxes.
I’m confused. It’s not a person collecting taxes? Are tax collectors, cops (if it comes to force), etc, not people?
I’m pretty sure the overwhelming majority of taxes are not collected in the tax-collector-based way depicted in the Disney version of “Robin Hood”. I do object when force comes to be involved. (I don’t have any suggestions on what to do about it. Something being wrong doesn’t, actually, stop anyone from doing it.)
They’re not collected in the tax-collector-based way because there’s no need to—there’s enough of a threat of force to get people to comply. If it’s a credible threat, the government would use force on non-compliers, presumably thus violating their rights. As you said, something being wrong doesn’t stop anyone from doing it—but it does license you to say that they shouldn’t do it, and it licenses the victims to resist.
Okay. Elsewhere in thread when I was walking through the photography example, I said that if there were a right to not be photographed but it were generally known that the customs were different for public figures, becoming a public figure on purpose might constitute consent. This is why I think guaranteed exit is so important—if it were in place, you could move to whatever country had the taxation setup you could best tolerate if they’d have you, and that would be that.
Even without guaranteed exit, countries can have a price of admission, though. (Sort of like even if there is no universal healthcare, your doctor can charge, and even if there is no food bank, so can the grocery store.)
I really doubt that anyone is waiting for me to license them to tax dodge or pick fights with cops.
This assumes that staying implies consent, which is a questionable assumption. It presupposes that the State has the right to do whatever it wants on its territory as long as it lets people leave (even if the only other state in the world is even more authoritarian). For example, if half of the world were ruled by North Korea and the other half by China, would you say that China’s policies were morally justified because people would be free to leave and move to North Korea?
No, but they may like morality to license them to avoid taxes or resist cops. (Although I do like the image of someone thinking, “Damn, taxes suck, if only that person who wrote that Twilight fanfic said I don’t have to pay them.”)
No kidding it’s questionable, hence my thing about guaranteed exit. But likewise the various agents of the government do not necessarily consent to freeloading. If the Red Cross puts out juice and cookies for blood donors, and you are not a donor, and you take some, you are stealing even if there is nowhere else for you to get food.
No, it does not imply that. They can’t do things suddenly, in particular (because then that particular aspect of the interaction hasn’t been consented to). Consent is also revocable at any time even if standing permission is granted. They also have to stick to contextual relevance in attempting to enforce laws. (Also, a government that was operating under Alicorn Morality couldn’t lie, which I think all by itself would shake some things up.)
I am unqualified to have an opinion on the details of the political situations in most countries. If I just read this as “Bad Country With Guarantee of Exit” and “Worse Country With Guarantee of Exit”, well, that sounds like a pretty lousy situation to be in, but nothing about this situation means the countries involved have to “charge less” or require different standards of behavior from their citizens.
Imagine that the world is divided between Fascistia and Communistan. One day, the Duce of Fascistia announces that in a year, all the wealth of the residents of Fascistia will be confiscated to build statues of Mussolini, but before then, they’re perfectly free to take their stuff and move to Communistan. The General Secretary of the Communist Party of Communistan announces that he’ll happily accept all new immigrants, but warns that in a year, all the wealth of residents of Communistan will be confiscated to build statues of Lenin. In this case, the change is not sudden (if you consider this sudden, change “in a year” to “in ten years”) and it doesn’t prevent either country’s residents from leaving. Is this a rights violation?
Or consider another scenario. One day you’re checking your mail and find a letter from famed thief Arsene Lupin, informing you that in a year he will be breaking into your house to steal a recent painting you’ve acquired. M. Lupin happens to read LessWrong from time to time, so he’s read your writings on morality. He writes that you are free to leave your house and take your possessions with you, thwarting him. Nevertheless, you don’t leave. In doing so, have you consented to the painting being stolen?
I am entertained by your examples.
Assuming the residents of Fascistia and Communistan have no wherewithal to create separate states (including by declaring subregions independent and declining to accept further services from the parent countries, thereby ending the transactional relationship; forming seasteads; flying to the Moon; etc.) it sure looks like they are in a pickle, unless they manage to use this year to become sculptor suppliers, or attempt to convince the leaders in question to change their minds. This is sort of like my version of the utility monster—sure, in real life, there are large numbers of diverse people and institutions you could choose to interact with, but what if your choices were Bad and Also Bad?! - and I have to bite the bullet here. (I do think it’s probably hard to construct a situation where nobody is, for whatever reason, capable of declaring independence, but if you cut off that route...)
I don’t consent to interact with M. Lupin or allow him into my house on any level. We are not in a transactional relationship of some kind that would imply this.
This seems a strange place to bite the bullet. Why can the state seize property (with ample warning) but M. Lupin can’t? The state is made of people, and if no person is permitted to seize it, then the state isn’t either. Alternatively, if the state is permitted to seize it, then some person must be as well, so it seems that people would then be allowed to make demands that entitle them to your stuff.
Why is this different from the state? Is it because it provides services? Would this be any different if M. Lupin broke into your house every day to do your laundry, without your consent, and then claimed that he had a right to the painting as payment for his services?
The services thing is key, but so is consent (of some kind, with guaranteed exit, etc. etc., caveat caveat). I don’t consent to M. Lupin coming into my house even to do my laundry; you can’t throw a book through somebody’s open window and demand ten dollars for it; if I make a batch of cookies I cannot charge my neighbors for the smell. If the people of Provinceland declare independence from Communistan, and Communistan officials commence visiting Provinceland without permission, continuing to maintain the roads even if Provincelanders wish they would go away, then Communistan is conducting a (bizarre) invasion, not a consensual transaction.
How many people does it take to secede? Would it be permissible for California to secede from the US? What about the Bay Area—would it be morally permissible for it to be its own country? What about a small suburb? One house? Can I unilaterally secede, then claim that tax collectors/cops are invading my country of Blacktransylvania?
I don’t have a minimum number in mind, although you’d certainly need a fair number for this to be advisable. I will bemusedly support your solo efforts at secession if that is meaningful to you, provided that the land you’re trying to secede with belongs to you or someone amenable to the project.
Thank you for explaining your position. It’s surprisingly radical, if your last sentence is to be taken literally. I have one last question. Assume a few of my neighbors and I secede, and say that tax collectors are unwelcome. May we then amend our permission to say that tax collectors are welcome, but only if they’re collecting up to X amount of taxes, where X is the amount needed to fund [list of US government services we support], in return for receiving those services?
I don’t see why not, but the United States is not obliged to offer the services a la carte.
What do you mean “comes from”? The rule in question fails to exist; it doesn’t have to come from anywhere, it just has to not be. Do you think that it does be?
Someone photographing you has a different intention from someone murdering you. (If the photographer believed that taking a picture of you would, say, steal your soul, then I would hold them responsible for this bad behavior even though they are factually mistaken.)
I don’t think literally all rights are negative. Positive rights are generally acquired when someone makes you a promise, or brings you into existence on purpose. (I think children have a lot of claim on their parents, barring unusual circumstances.) But nothing has happened to create a positive obligation between you and a random photographer.
I don’t actually know any jurisdiction’s laws about publishing nonconsensual photographs. What I’d be looking for would probably depend on what I could reasonably expect to succeed at getting. This entire endeavor has left the moral sphere as long as I don’t violate the photographer or anyone else’s rights. My goal would probably be to discourage non-consensual photography of me and in general, as it’s kind of a dick move, and to compensate myself (perhaps with money, since it’s nicely fungible) for the unpleasantness of having been nonconsensually photographed. If I do not, in so doing, violate any rights, I can seek whatever is available, no problem.
This entire thing is actually complicated by the fact that I think political entities should guarantee an opportunity of exit—that if you can’t live with your society’s set of rules you should be shown out to any other place that will have you. Without that, there’s definitely some tension on my moral system where it interacts with the law. If we had proper guarantee of exit, the photographer being around me at all constitutes agreement to live by applicable shared rules, which in this hypothetical include not nonconsensually photographing each other (I don’t know if that’s a real rule, but supposing it is) and also not breaking into each other’s houses and also paying fines when legally obliged to do so. In the absence of guarantee of exit it’s complicated and annoying. This also gets really stupid around intellectual property laws, actually, if you really want to put the squeeze on my system. I’m just gonna say here that any system will stop working as nicely when people aren’t cooperating with it.
I don’t think I’d characterize burglary as “attack”, but I already listed “stealing” separately in that shortlist of things.
I… am not sure what that paragraph means at all. In more detail, my question is twofold:
1. What are deontological rules in general, and rights in particular? Are they, for example, laws of nature such as gravity or electromagnetism; are they heuristics (and if so, heuristics for what); or are they something else?
2. How do we know which deontological rules we should follow in general, and which rights people have specifically? For example, you mentioned earlier that people do not have a right to not be photographed. How do you know this?
Once again, how do you know this?
Fair enough; I was using “attack” in the general sense, meaning “an action whose purpose is to diminish an actor’s well-being in some way”.
That said, I’m not sure I understand your model of how the legal system interacts with morality. At one point, you said that the legal system is ethically neutral; I interpreted this to mean that you see the legal system as a tool, similar to a knife or a lockpick. Thus, when you said that you’d wield the legal system as a weapon against the photographer (or, more specifically, his money), I questioned the difference between doing that and wielding a lockpick to accomplish the same end. But now I’m beginning to suspect that my assumption was wrong, and that you see the legal system as something other than a tool—is that right?
Depending on how you define “purpose”, burglary still might not qualify. The purpose of a burglary isn’t to harm its victims, it’s to acquire their valuables; harm is a side effect.
Good point; in this case, the fact that the victims lose said valuables is merely a side effect of how physical reality works.
Perhaps a better definition would be something like, “an action at least one of whose unavoidable and easily predictable effects is the diminishing of another actor’s well-being”.
Rights are a characteristic of personhood. Personhood emerges out of general intelligence and maybe other factors that I don’t fully understand. Rights are that which it is wrong to violate; they are neither laws of physics nor heuristic approximations of anything. They are their own thing. I do think they are necessary-given-personhood.
Can you tell me where I lost you in my detailed description of the process I used to determine that people don’t have that right? I wrote the whole thing down as best I could.
Promises are the sort of thing that generates positive rights because that’s what “promise” means. If it doesn’t do that, it’s something other than a promise. (At least formally. You could have other definitions for the same word. The particular sense in which I use “promise” is this thing, though.)
I think if I were you I’d be really careful with my paraphrasing. I’m not going to object to this one in particular, but it brought me up short.
The legal system is many things; it definitely works as a tool for things like collective action and coordination problems, deterrence of disruption to the social order, and more. I’m not sure what you’re reading into the word “tool” so I’m not sure whether I want to claim to see it exclusively as a tool or not.
I want to ask “why”, because I don’t fully understand this answer, but I fear that I must ask the more difficult question first: what do you mean by “personhood”? I know it can be a tricky question, but I don’t think I’ll be able to figure out your position otherwise. However, this next line gave me pause, as well:
Since I am not a deontologist (as far as I know, at least) I read this as saying: “rights are sets of rules that describe actions which any person (pending Alicorn’s definition of personhood) must avoid at all costs”. Is that what “wrong to violate” means?
I’m having trouble with the “process” part. From my perspective, whenever I ask you, “how do you know whether a person has the right X”, you either list a bunch of additional rights that would be violated if people didn’t have the right X; or derive right X from other rights, whose origin I don’t fully understand, either. Clearly I’m missing something, but I’m not sure what it is.
I do acknowledge that your system of rights makes a sort of sense; but the only way I know of interpreting this system is to look at it and ask, “will these rules, if implemented, result in a world that is better than, or at least as good as, the world we live in now?” That is, from my perspective, the rules are instrumental but not terminal values. As far as I understand, deontologists treat rights as terminal values—is that correct?
I did not want to make it sound like I’m putting words in your mouth. Whenever I say something like, “you, Alicorn, believe X”, I only mean something like, “to the best of my understanding, which may be incorrect or incomplete, Alicorn believes X, please correct me if this is not so”.
By “tool”, I mean something like, “a non-sapient entity which a sapient agent may use in order to more easily accomplish a limited set of tasks”. For example, a hammer is a tool for driving nails into wood (or other materials). The “grep” command is a tool for searching text files. The civil legal system could be seen as a tool for extracting damages from parties who wronged you in some way.
I believe (and you might disagree) that most tools (arguably, all tools, though weapons are a borderline case) are morally neutral. A hammer is neither good nor evil; it’s just a hammer. I can use it to build shelter for a homeless man, thus performing a good act; or I can use it to smash that man’s skull, thus performing an evil act; but it is the act (and possibly the person performing it) that is good or evil, not the hammer.
I don’t have a really thorough account of personhood. It includes but is not limited to paradigmatic adult humans.
I definitely wouldn’t have chosen that phrasing, but it doesn’t seem obviously wrong?
I’m not sure where you want me to ground this. Where do you ground your morality?
I wouldn’t choose the word “value”, but they definitely are non-instrumental in nature.
I will tentatively classify the legal system as a tool in this sense, albeit a tool for doing some very abstract things like “solve coordination problems”.
So how do you know that rights “naturally fall out of” personhood, if you don’t really know what personhood even is?
Ok, so in this case my problem is with the prescriptive nature of rights. What does “must avoid” mean in this case? I personally can think of only three (well, maybe 2.5) reasons why an action must be executed or avoided:
1. The action will lead to some highly undesirable consequences. For example, jumping off of very high places must be avoided at all costs, because doing so will result in your death.
2. The preference or aversion to the action is hardwired into the person (via genetics, in case of humans). For example, most humans—even newborn ones—will instinctively attempt to stay away from ledges.
3. The action is part of the laws of nature that act upon all physical objects. For example, humans on Earth can’t help but fall down, should they find themselves in mid-air with no support. The same is true of rocks.
I’m not sure, but I don’t think any of these points adequately describe deontological rules. Point #1 is conditional: if your death becomes highly desirable, you may find jumping off a cliff to be a reasonable action to take. Points #2 and #3 are more descriptive than prescriptive. Regarding #2, yes we are wired to avoid ledges, but we are also wired to desire fatty foods, and in the modern world some of us must fight that compulsion every day or face highly undesirable consequences. Point #3, of course, is entirely descriptive; yes, objects fall down, but what you do with that knowledge is up to you.
Note also that there is a clear strategy for learning about reasons #1, 2, and 3: we look at the evidence and attempt to adjust our belief based on it. Again, I don’t understand how we can learn about deontological rules at all.
I have some sort of a utility function which is hardwired into my personality. Lacking perfect introspection, I can’t determine what it is exactly, but based on available evidence I’m reasonably sure that it includes things like “seek pleasure, avoid pain” and “increase the pleasure and reduce the pain of other people in your tribe”. Based on this, I can evaluate the fitness of each action and act (or choose not to act) to maximize fitness.
Obviously, in practice, I don’t apply this reasoning explicitly to every action; just like you don’t apply the full Bayesian reasoning machinery to every rustling noise that you hear from the bushes. It would take too long, and by the time you figure out P(tiger | rustling), you’d be tiger-food. Still, that’s merely an optimization strategy, which is reducible to the underlying reasoning.
I’m starting to get concerned that you have some intractable requirements for completeness of a philosophical theory before one can say anything about it at all. Do you think your ethics would withstand a concerted hammering like this? Do you know how to compare utility between agents? What are your feelings on population ethics? How do you deal with logical uncertainty and Pascal’s muggings in complex Omega-related thought experiments? I’m not planning to make you solve these peripheral problems before allowing you to say that you endorse actions that have the best consequences over their alternatives (or whatever framing you prefer).
It means “if you don’t avoid it, you will be doing something wrong”. That’s all. Your guesses are wrong. Did you read Deontology for Consequentialists?
I wasn’t trying to Gish Gallop you if that’s what you’re implying. That said, I think you are underestimating the inferential distance here. When you say, “rights naturally fall out of personhood”, I literally have no idea what that means. As you saw from my previous comments, I tried to stay away from defining personhood as long as possible, but I’m not sure I can continue to do that if your only answer to “what are rights” is something like “an integral part of personhood”.
Pretty much the only possible ways I can translate the word “wrong” are (a) “will lead to highly undesirable consequences”, and (b) “is physically impossible”. You ask, “Did you read Deontology for Consequentialists?”
Yes I did, and I failed to fully understand it, as well. As I said before, I agree with most (or possibly all) of the rights you listed in your comments, as well as in your article; I just don’t understand what process you used to come up with those rights. For example, I agree with you that “killing people is wrong” is a good rule; what I don’t understand is why you think so, or why you think that “photographing people without permission is wrong” is not a good rule. Your article, as far as I can tell, does not address this.
Treat “acting in a way that violates a right” as “undesirable consequences”—that is, negative utility—and everything else as neutral or positive utility (but not positive enough to outweigh rights violations).
“Wrong” here is, essentially, “carrying negative utility”—not instrumentally, terminally.
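If it helps to see the structure of that translation spelled out, here is a minimal sketch (in Python) of what it might look like: rights violations sit in a lexically prior tier that ordinary utility can never outweigh. The action names, the utility numbers, and the idea of simply counting violations are all hypothetical illustrations for the sake of the sketch, not anything Alicorn has endorsed.

```python
# A minimal sketch of "rights violations as lexically dominant negative utility".
# All names and numbers below are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    rights_violations: int  # how many rights this action would violate
    utility: float          # ordinary consequentialist utility, ignoring rights

def rank_key(action: Action):
    # Sort by fewest rights violations first, then by highest utility.
    # No amount of ordinary utility in the second slot can outweigh a
    # difference in the first slot, which mirrors "not positive enough
    # to outweigh rights violations".
    return (action.rights_violations, -action.utility)

def choose(actions):
    return min(actions, key=rank_key)

if __name__ == "__main__":
    options = [
        Action("lie to spare feelings", rights_violations=1, utility=5.0),
        Action("stay silent", rights_violations=0, utility=0.0),
        Action("tell the truth tactfully", rights_violations=0, utility=2.0),
    ]
    print(choose(options).name)  # -> "tell the truth tactfully"
```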
Disclaimer: I am not a deontologist, and I’m certainly not Alicorn.
Well, I’m out of ideas for bridging the gap. Sorry.
Fair enough; I appreciate the effort nonetheless.
Ok, now how would you translate “undesirable”?
Where does your utility function come from?
These are good questions. It seems like deontologists have difficulty reconciling seemingly conflicting rights.
In my main reply to the original post, I discuss some of the conflicts between truthfulness and privacy. If people have a right to not be lied to, and people also have privacy rights, then these rights could clash in some situations.
Why these rights, and not others? For example, why a right to not be murdered, instead of a right to murder one person per year? A once-a-year right to murder can be formulated as a negative right, i.e. non-interference as you murder one person.
(I agree with the listed criteria for rights, BTW.)
Quantity limitations on rights are inelegant (what does one Earth year have to do with personhood-in-general?), so there’s that. Even if you frame it as “the right to go uninterfered-with during the course of one murder per year”, that has a heck of a lot of fiddly bits.
It also doesn’t interact with itself very well. Suppose you are trying to murder me, and I’m the first person you’ve tried to murder all year, and I haven’t murdered anyone all year either, so I try to murder you back—am I interfering with you? It sure looks like it, but I’m just exercising my own right… “Right not to be murdered” doesn’t do that sort of self-defeating thing.
I have other reasons to prefer the “right not to be murdered” version but they are failing to come verbally clear. Something about self-containedness that I’m having trouble explicating.
Consequentialist reasoning that seems to align fairly well with Alicorn’s conclusions (at least the one about it being correct in some situations to hide the truth by being selective, even when this in some sense deceives the listener, while at the same time being less correct to directly lie) is touched on here, if that’s useful to you.
Essentially: You don’t know for sure if a person wants general encouragement/niceties or a genuine critique. One way to deal with this is to say something nice+encouraging+true which leaves room for you to switch to “okay, but here is what you could do better” mode without contradicting your previous nicety, if and only if they communicate clearly that they want your full opinion after hearing your careful wording.
I find the reaction to this comment, both in the downvotes and some of the responses, interesting in light of the recent discussion about Tell Culture. That post was highly upvoted, but some people in the comments expressed the opinion that even the people who claim to endorse Tell culture really don’t, and that people who actually consistently operated on Tell Culture would end up getting punished, even in a community where most people claimed to endorse Tell.
As far as I can tell, the reactions to this comment are support for that hypothesis, as I see you as a person who consistently operates on Tell, and then (as in this case) occasionally gets censured for that, even in a community where a lot of people previously claimed that Tell sounds awesome.
I think you have it backwards. Chris Told, Alicorn punished him for it, and the community retaliated. This is a great victory for Tell culture and radical honesty, as long as you don’t believe Alicorn embodies them.
A key difference is that the community is incrementalist and consequentialist, while Alicorn is absolutist and deontologist. A lot of the commenters don’t believe that Alicorn accurately identifies liars. Expelling him is a step backwards from her claimed goal of honest associates. And, indeed, she did specify that it was instrumental to this goal and not just a rule she follows without regard to consequences. But it’s probably also that. The community’s failure to grasp the deontological aspects may make its reaction unfair; but I cannot judge, for the same reason. The basic reaction is that she is a very strong instance of Guess culture, where her associates have to guess how much to lie to her and are strongly discouraged from talking about it.
I don’t think that follows. The fact that we punish people for telling others about X, and we don’t punish them if they don’t, doesn’t mean we’re punishing them for telling; it means we’re punishing them for X. We’d really like to punish them for X whether they tell or not, it’s just that telling makes it easier.
It may be more understandable to think about it as cheating. You can either lose, or cheat and win. If you lose, you suffer all the effects of a loss. If you cheat, you may not suffer at all. But we don’t describe that as “punishment for not cheating”. It’s the same here: you can lose (have your opinions judged poorly) or cheat (conceal your opinions by not telling anyone, and escape being judged for them).