Wild Moral Dilemmas
[CW: This post talks about personal experience of moral dilemmas. I can see how some people might be distressed by thinking about this.]
Have you ever had to decide between pushing a fat person onto some train tracks or letting five other people get hit by a train? Maybe you have a more exciting commute than I do, but for me it’s just never come up.
In spite of this, I’m unusually prepared for a trolley problem, in a way I’m not prepared for, say, being offered a high-paying job at an unquantifiably-evil company. Similarly, if a friend asked me to lie to another friend about something important to them, I probably wouldn’t carry out a utilitarian cost-benefit analysis. It seems that I’m happy to adopt consequentialist policy, but when it comes to personal quandaries where I have to decide for myself, I start asking myself about what sort of person this decision makes me. What’s more, I’m not sure this is necessarily a bad heuristic in a social context.
It’s also noteworthy (to me, at least) that I rarely experience moral dilemmas. They just don’t happen all that often. I like to think I have a reasonably coherent moral framework, but do I really need one? Do I just lead a very morally-inert life? Or have abstruse thought experiments in moral philosophy equipped me with broader principles under which would-be moral dilemmas are resolved before they reach my conscious deliberation?
To make sure I’m not giving too much weight to my own experiences, I thought I’d put a few questions to a wider audience:
- What kind of moral dilemmas do you actually encounter?
- Do you have any thoughts on how much moral judgement you have to exercise in your daily life? Do you think this is a typical amount?
- Do you have any examples of pedestrian moral dilemmas to which you’ve applied abstract moral reasoning? How did that work out?
- Do you have any examples of personal moral dilemmas on a Trolley Problem scale that nonetheless happened?
The Username/password anonymous account is, as always, available.
I had to decide whether I would send my sister to prison for a year or let her keep using IV drugs. I chose to send her to prison, but this was not the intuitive choice. I very much performed a utilitarian calculation. This leads me to remark on socioeconomic class: My station has certainly improved since childhood, but I would still say that I’m very much working class, and I dare say that the reliability of one’s moral and memetic heuristics and inputs is very dependent on class.
In my personal experience, though I take a risk in fully generalizing, the working class is permeated with toxic memes. The most common and general is probably anti-intellectualism, but there are other more specific ones that are better communicated as phrases: “It is better to be thrilled than it is to be safe”; “It is more important to conform to working-class social norms than to obey the law”; “Physical, verbal, and emotional abuse are tolerable so long as the abuser loves me”; “Physical exercise and healthy diet merely confer bonus points”; “Regrettable actions committed on emotional impulse are entirely excusable, even with this maxim in mind”; and perhaps most ironically, “One should follow one’s heart,” without the caveat that one should not follow it over the edge of a suspension bridge.
This is not to say that the other classes are entirely nontoxic, but I would say that they are less toxic. You can see that in the other classes, being safety-conscious, exercising and eating healthy food, not tolerating abuse, and at the very least keeping up the appearance of deliberation are acts that actually confer social status. When I spend time around people in a higher socioeconomic class, it seems that they on average have healthier thoughts than me, if we’re talking about gut reactions and intuitions, as we are, even if they have not deliberately sought out and acquired their memes. We would expect them to seem healthier in one sense, and also in a more objective sense, because socioeconomic class, mental and physical health, and all of those other enumerable things correlate with one another; it is social, so it is a causal shooting gallery, but the correlation is there.
And likewise, LessWrong is skewed heavily towards white, male, very well-educated first-worlders. We might expect that an average LW user simply relying on the memes they’ve acquired, and not applying a moral calculus at all, would do little or no worse than one applying the calculus, and perhaps even better, if the calculus would only be applied selectively and in pursuit of justification.
And so in my everyday life I find that I am surrounded by people with unhealthy memes and that I myself have some curled up in the various corners of my mind, and it is, more often than one might think, safer and very useful to consciously deliberate as opposed to following intuition. Virtue ethicists who consider danger, thrill-seeking, impulsiveness, and anti-intellectualism virtuous do not live very long on average.
And furthermore, though I am technically twisting your words to my own end, I do not think that it is such a crazy hypothesis to say that higher classes lead more ‘morally inert’ lives, because many healthier memes allow you to ‘skip’ the moral dilemmas altogether; e.g. contraceptive use, abiding by the law, taking care of your health, surrounding yourself with people who do all of these things and have all of these healthy memes, etc.
But of course, neither am I a human utility calculator.
An interesting comment. To what extent, do you think, do the memes you’ve mentioned apply mostly to young people, in particular young males? I have the impression that the older generation suffers much less from the “Hold mah beer and watch this” syndrome. This may be because they’re just older (which means both that they managed not to kill themselves and that their biochemistry makes them less aggressive and rash), or this may be because it’s just a different generation which grew up in different conditions.
I would say that, considering that much of what I’ve mentioned has to do with a lack of risk aversion, it would be skewed at least somewhat towards young people. But simultaneously and counterintuitively, I would say that it applies to young women more than one would initially suspect; my just-so story for this is that greater society-wide gender equality manifests in the minds of many working-class young women as “Do what the boys are doing because I can now,” which amounts to pronking. I’ve noticed that my sister in the past has done dangerous things for the sake of social status. But I also think that all of my words should be taken in context, because I am myself only one relatively uneducated, working-class, young male, which holistically is simultaneously a source of authority and bias.
But of course, not everything that I’ve said has to do with a lack of risk aversion, so if we were to dissolve this slightly and examine some of the individual memes that I’ve discussed, some may apply to older people as well, such as a greater tolerance for abuse, heart-following, and of course anti-intellectualism. Also, I do have some rural relatives who suffer from the aforementioned syndrome despite their age.
I found this response very insightful. It ties in with a variety of other things I’ve been thinking about recently, and has given me a great deal of food for thought. Thank you for sharing it, and you have my sympathies regarding your sister.
Thank you as well; I didn’t mention it because the decision rather than the ultimate outcome was the relevant part of this discussion, but she ended up with a deal in which she would receive six months in jail and live at a dual-diagnosis (she has generalized anxiety disorder) halfway house for some time after that, so the outcome has been quite positive compared to alternatives.
Interesting post!
I just noticed that Robin Hanson posted something somewhat relevant to this topic this morning.
Yes, it’s relevant, though I have to confess I don’t understand his point. As far as I can see, all he is saying is that preferences and attitudes (“tastes”) matter for the outcomes, which is trivially true and doesn’t seem to be insightful. Parents have been trying to instill “proper values” in their children since time immemorial.
Many of the world’s greatest moral improvements rested precisely on using some material means to transform some choice people just aren’t very good at making from Highly Morally Significant to Mostly Morally Inert. Contraception and universal education are probably the easiest examples here: we all know that people are going to have irresponsible sex and that most people aren’t very intellectual. Making otherwise irresponsible sex and otherwise irresponsible anti-intellectualism increasingly harmless has done way, way, waaaaaaay more for overall well-being than literally centuries of haranguing people to become more chaste and learned.
I have found that in Eastern Europe the working class lives as sanely as the middle class, or saner. I think the difference is that you are used to a working class that 1) feels less and less needed, as there are fewer and fewer simple jobs, 2) similarly cannot utilize its skills outside its jobs, since its members don’t live in villages and cannot build stuff for themselves, and 3) is protected by the welfare society from some of the consequences. The people I am talking about feel the opposite: they are still needed for jobs because they are paid less than machines would cost, they often live in villages and spend the weekends building each other’s garages, and they receive hardly any welfare net; they often have to bribe doctors to get any semblance of serious medical care.
So it seems it is the toxic combination of being superfluous / aimless / futureless and being protected by an actually well-financed welfare state that is killing the Western working class.
Anti-intellectualism may or may not be a bad thing depending on the type.
I don’t know, esr seems to be stretching the point here. His two “good” types of anti-intellectualism, Hayek and Sowell, I would probably call internecine warfare. Both his examples were intellectuals and I doubt they would object to more intellectuals like themselves.
One handy definition of intellectuals: people who expect their opinions to be taken seriously in field X based on prestige built in an unrelated field Y. A classic example is Einstein writing about socialism based on the prestige he acquired in physics. A more general example is writers, people-of-letters, literature and poetry folks engaging in politics. If we accept this definition, Hayek and Sowell were not intellectuals; they never wandered too far from the field they actually had expertise in.
But why accept such a quirky definition? The logic behind it is: when you are, say, an economist, and pontificate about economics, you are acting as an economist. When you are a physicist or writer and pontificate about politics or economics, you are obviously not acting as a writer or physicist but as a Generic Smart Person. Being a good writer or physicist proves you are smart (roughly: true enough), and you expect people to accept your opinion because you are smart. The unspoken assumption is that smartness matters more than expertise in forming correct opinions. Thus people who expect others to accept their opinions about economics because of their expertise are called economists, and people who expect others to accept their opinions about economics (or anything) because they are smart are called intellectuals: people whose defining (social) feature is the intellect, not the expertise.
On a broader view, ideally, people should expect their opinions to be accepted because they are actually well evidenced and argued, not based on authority. But the “masses” tend to accept views based on authority. So the expert uses the authority of expertise, and the intellectual uses the authority of generic smartness (which is proven by success in an unrelated field).
Completely off-topic, but do you have a policy for when you emphasise with italics and when you emphasise with bold?
A very vague one. Bold is a bit stronger than italics, plus italics are overloaded, they are used to signify other things than emphasis, too. In the grandparent post there are both italics and bold because the emphasis is somewhat different so I wanted two different ways to emphasize.
He is talking about how the phrase “anti-intellectualism” is actually used in practice.
Don’t think I’ve seen it used in practice much and those times it was clearly derogatory.
In particular, it’s used in a way that intentionally conflates the various meanings.
Should I tell the truth and weaken social bonds or keep silent and maintain social bonds?
I consider the importance to me of a truth or a bond, then I make my choice. Outcomes vary.
What kind of truths do you mean?
Politics/religion? “How are you doing?” Secrets?
How does this count as a moral decision?
The moral choice is indicated by a question mark in the sentence prior to the one you quote. The sentence you quote is my resolution process. The final sentence is the outcome.
I still don’t see why this is supposed to be moral reasoning. It’s just about the importance of things to you. To me it looks like just as much of a moral decision as your decision to have toast for breakfast or not.
It shouldn’t be the “importance to me”, but the importance to everyone and everything. On top of that, dilemmas tend to be about things we have a bias in. The calculus of virtue is a real danger, and unwise. We shouldn’t do it, but we do it anyway. Remember, the bright are the most likely to be biased.
The compromise is to at least ask another person’s opinion.
“The young man knows the rules, but the old man knows the exceptions.”—Oliver Wendell Holmes
Food: I would like to stop contributing to animal suffering, but I also like the taste of a good meal, and I want to have a balanced diet. I do not want to spend much time studying diet, because that topic is boring as hell to me, and I believe that if I eat a random non-vegetarian diet, it will be closer to a balanced diet than a random vegan diet. Also, it is convenient to have a lunch near my workplace, with my colleagues, and there are not many vegan options there.
My solution here is to take the most vegetarian-ish choice from the conveniently available options. If I were single, I would eat joylent for breakfast and dinner, but living with other people, I again try to eat the most vegetarian-ish choice given, being open but not obnoxious about my preferences.
Work: When I was a libertarian, I was proud of working in the private sector, not working for the state. But then I realized that many private companies I worked for also did some projects for the government. So I wasn’t sure whether there really is a meaningful difference between working for the state directly and using an intermediary.
Later, when I wanted to be a teacher, there was a choice between private and public school. Unfortunately, in my country, the public schools are the good ones, and the private ones are a “pay for good grades and a diploma without any learning” system. (That’s because in my country employers care about you having a diploma, but don’t care which university gave it to you. Of course in such an environment diploma mills are very popular among students.) I tried a private school anyway, because they convinced me their school was different, but it actually wasn’t, and when I saw my colleagues being blackmailed into giving good grades, I quit. And then I taught in a public school, which was better, until I ran out of money, so I returned to programming.
Maybe half of the IT business in my country means doing something for the government (state or local), which often is just a pretext for stealing money from taxpayers. (When your business strategy is being friends with influential politicians and selling overpriced products to the government, it is better to sell a product such as an information system, where the average voter has no idea what the market price should be.) So, when I have a suspicion that my employer is doing exactly this, is it my moral duty to quit? Also, what difference would it make? If they received the deal because of political connections, they will receive it anyway, and for the same price, so the only difference is what quality the taxpayers will finally receive. If I contribute to making such a project better, am I doing a good thing (providing better quality to the taxpayers) or a bad thing (helping to excuse a theft)?
The work in IT itself has some dilemmas: given a choice between two possible solutions, should I as an expert recommend the one better for my employer, or the one better for me (e.g. where I can learn new things that will later increase my value on the job market, even if it means that this specific project will take a little more time and be a little more expensive)? It is very easy to rationalize here a lot; I may convince myself that using a more sophisticated technology is “better in long term” for my employer.
How hard should I work, and how much time should I spend reading the web? Especially when everyone spends a part of their working time browsing online news. Or could I perhaps use those parts of my working time more meaningfully, such as by learning something new? Is it an excuse if I will later use some of the skills gained this way for my employer’s benefit?
In most situations I choose some kind of middle way: not doing obviously immoral things, but also not going an extra mile to be perfect. -- However, this is probably how almost everyone could describe their choices, because usually there is at least one less moral and one more moral alternative compared with what you did.
I am well aware of how much my moral choices are a result of what people around me are doing. I wouldn’t even call it “peer pressure”, because those people do not really exert any significant pressure on me, and I am weird enough so I wouldn’t care so much about being weird in one more thing. It’s just… I don’t want to inconvenience myself with moral tradeoffs more than people around me do.
It is obvious what this adaptation means. It prevents me from seeming immoral, but it also prevents me from taking morality so seriously that it would give me a big disadvantage compared with the rest of my tribe. But that’s an evolutionary description, not a psychological one. For evolution, it means “be only as moral as necessary, not a bit more”. But for me, psychologically… I want to do the right thing, but I also want to be surrounded by people who do the right thing. When people around me don’t do the right thing, it feels futile when I try to do it, so I gradually give up. What is the difference? If you gave me a choice of living in one of two otherwise equivalent cities, only one of them completely vegan, I would choose the vegan city, even knowing the other city was available. I just don’t want to have the temptation right in front of my eyes. Similarly, I would rather work in a company where everyone tries their best than in a company where people choose the easiest way; it’s just difficult to try doing my best when I keep seeing people who choose the easiest way, especially if once in a while their laziness makes my own work harder.
I think you’ve touched on something important here that I also touched on at the end of my comment above; namely, that in practice, it is often more effective to invest resources in taking preemptive steps to avoid moral dilemmas than it is to prepare for, or expect to be satisfied with your behavior in, actual moral dilemmas.
Such as to build some safety mechanism in trolleys? :D
Also, the Bible says “do not bring us into temptation” instead of “help us overcome temptation”.
This is, of course, a very narrow comment, but it seems to me that vegan salads based on boiled rice (for bulk) are pretty versatile. Add a cut apple, fresh herbs (dill, young garden marigold, garden nasturtium, parsley, rocket, basil in moderation), oil, spices, perhaps some other fruits if you like them, boiled asparagus, tomatoes… I like it also because the rice can be stored in the fridge, and the rest doesn’t take much time to add. I hope it is of some use to you:)
Virtue ethics is not the worst heuristic.
Definitely. Whether to cheat on a test used to be a common moral dilemma. I ended up making different decisions in different circumstances, based on both virtue ethics and consequentialism, and occasionally deontology.
Often in real life dilemmas the hard part is being honest with oneself rather than doing an accurate utility calculation.
The most important dilemma I encountered is probably my career choice which I got wrong by rationalizing my desire for luxuries and social status. The dilemma is made considerably more difficult by having responsibility to my family rather than only to myself. Essentially the same tradeoff (luxuries vs greater good) comes up in day to day choices as well. Often it is hard to tell whether you really need that extra indulgence to maintain motivation.
Another notable moral dilemma is regarding what is acceptable to eat. I gave up on eating mammals long ago and currently try to stick to a mostly vegan diet, however I’m not quite sure about the right solution. Here some of the difficulty truly arises from philosophical questions like what sort of entities have moral status and how to weight quantity vs. quality of animal lives.
Most of the moral dilemmas I face in real life I’ve never read about in ethics or philosophy classes. Most of my real world experiences are more along the lines of decision theory/prisoner’s dilemmas.
So for example, if someone has wronged me, what does moral philosophy say I should do? I’m not sure because I don’t really know where to look or even if this question has been answered; to my knowledge it’s never been addressed in any philosophy or ethics undergrad courses I took.
But from a prisoner’s dilemma point of view, I have to juggle whether I should cooperate (let it slide) or defect (retaliate). If I let it slide, then I might be sending the signal that I’m a cooperate bot and future agents will think they can take advantage of me. But if I retaliate, then this might descend into an infinite loop of defect bot behavior. And from either of those nodes, I have to take into account the degree to which I cooperate or defect.
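(For what it’s worth, here is a minimal sketch of the kind of payoff structure I have in mind. The payoffs and strategy names are made up purely for illustration, not taken from any particular source.)

```python
# A toy iterated prisoner's dilemma: hypothetical payoffs, purely illustrative.
PAYOFFS = {  # (my move, their move) -> (my payoff, their payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def cooperate_bot(history):  # always lets it slide
    return "C"

def defect_bot(history):     # always retaliates
    return "D"

def tit_for_tat(history):    # cooperates first, then mirrors the other's last move
    return history[-1][1] if history else "C"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(cooperate_bot, defect_bot))  # (0, 50): the exploited "cooperate bot"
print(play(defect_bot, defect_bot))     # (10, 10): the endless retaliation loop
print(play(tit_for_tat, tit_for_tat))   # (30, 30): mutual cooperation
```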
None. I’m a virtue ethicist, more or less, of an Objectivist bent. A “dilemma”, to me, is a choice between two equally good things (which virtue I want to emphasize), rather than two equally bad things.
It feels like “None.”
No.
No.
“Trolley Problems” are less about describing genuinely difficult situations, and more about trying to find faults with ethical systems or decision theories by describing edge scenarios. To me, they’re about as applicable as “Imagine there’s an evil alien god who will kill everyone if you’re a utilitarian. What is the most utilitarian thing to do?”
ETA: In fairness, though, I don’t see any ethical issue in the Trolley Problem to begin with, unless you tied all the people to the tracks in the first place. I regard any ethical system as fatally flawed which makes a rich man who walks through a rich neighborhood and is completely ignorant of any misery -more ethical- than a rich man who is aware of but does nothing about misery. Whether or not you qualify as a “good” person shouldn’t be dependent upon your environment, and any ethical system which rewards deliberate ignorance is fatally flawed.
Failing to reward deliberate ignorance has its own problems: all ignorance is “deliberate” in the sense that you could always spend just a bit more time reducing your ignorance. How do you avoid requiring people to spend all their waking moments relieving their ignorance?
“Failing to reward deliberate ignorance” doesn’t equal “Punishing deliberate ignorance.” The issue here is not the ignorance, the issue is in making ignorance a superior moral state to knowledge.
Take ethics out of it: Suppose you were the server admin for the Universe Server Company, where all existing universes are simulated for profit. Suppose that happy universes cost more resources to run than unhappy universes, and cost our imaginary company more money than they make; “lukewarm” universes, which are neither happy nor unhappy, make just as much money as unhappy universes. If the USC were required by law to make any universes it discovered to be less-than-Happy universes Happy, what do you suppose company policy would be about investigating the happiness level of simulated universes?
How do you suppose people who feel obligations to those worse-off than they are cope with this sense of obligation?
The practical effect of this system amounts to punishing ignorance. Someone who remains ignorant takes a risk that he is being unknowingly immoral and therefore will be punished, and he can only alleviate that risk by becoming less ignorant.
In your analogy, we would “fail to reward deliberate ignorance” by requiring the Universe Server Company to make all the universes happy whether they discovered that or not. That would indeed impose an obligation upon them to do nothing but check universes all the time (until they run out of universes, but if the analogy fits, this isn’t possible).
Ah! You’re assuming you have the moral obligation with or without the knowledge.
No, I take the moral obligation away entirely. For the USC, this will generally result in universes systematically becoming lukewarm universes. (Happy universes become downgraded since it saves money, unhappy universes become upgraded since it costs the company nothing, the incentive for the search being fueled by money-saving approaches, and I’m assuming a preference by the searchers for more happiness in the universes all else being equal.)
A law which required universal “Happiness” would just result in USC going bankrupt, and all the universes being turned off, once USC started losing more money than they could make. A law which required all universes -discovered- to be less than Happy to be made into Happy universes just results in company policy prohibiting looking in the first place.
So in your original example, both the rich man aware of misery and the rich man ignorant of it have no moral obligation?
If that’s what you mean, I would describe the old system as “punishing knowledge” rather than “rewarding ignorance” since the baseline under your new system is like lack of knowledge under the old system.
I also suspect not many people would agree with this system.
Correct.
That’s what I attempted to describe it as; my apologies if I wasn’t clear.
We are in agreement here.
Do we define a moral dilemma as something where you are not punished for making the wrong choice? Because if you are, it is more of a calculation for your own profit.
In my personal life I encounter almost none, since there would almost always be some kind of punishment, at least people thinking I am an asshole and being less willing to help me in the future, and this makes them not purely moral dilemmas.
I have a hunch that moral dilemmas are “meant” to be more political. Like should we allow factory farming of animals.
Also they are for people with more interesting jobs such as docs.
I think the legal system of the first world is pretty much tied down so much that a normal mundane citizen rarely encounters purely moral dilemmas. Usually, if it is dubious it is not allowed.
Therefore, moral dilemmas are handled at law-making, hence at voting. They are political.
For example, Climate Change / AGW is a huge moral dilemma for me. I tend to lean towards the skeptics being more right, because the alarmists have been talking about taking action in the last dramatic minute for 20 years now. The alarmists look a lot like the usual suspects of anti-industrialist hippies. But do I really dare to gamble with this politically? It would be safer to act as if the alarmists are right. My feelings about a bunch of kumbaya hippies are less important than not making the planet almost uninhabitable, and if there is only a 1% chance the whole alarmist case is right, despite their many problems, we should be working more on cutting CO2...
I’m not understanding your argument here.
If the intended conclusion is “… so they’re probably wrong”, I just don’t get it at all. I mean, I don’t think anyone[1] ever claimed “we have to fix this now or we’ll all be boiled alive in 15 years”.
If the intended conclusion is “… so they’re probably insincere”, I kinda-sorta get it but it seems wrong. If you think you’ve discovered something that requires urgent action, and the people with the power to take that action keep on not doing it, of course you’re going to keep saying “look, we have to do this, it’s urgent”. No?
[1] Meaning serious climate scientists, of course. The Day After Tomorrow was not a documentary.
That seems like a nice No True Scotsman prologue :-) Do people like James Hansen qualify?
Hansen certainly does. Has he made predictions of the kind I said I didn’t know of anyone making? I skimmed through his Wikipedia page and didn’t find anything so extreme (though he’s said things that are extreme in other ways).
Actually, the models in the 1990s predicted rather dire consequences for 2015.
At some point it is too late to avoid the disaster, and better to start preparing for it. That would be my point. Anyone who predicts “do X now or disaster will happen in 20 years”, and then X is not done, loses a lot of cred when they still advocate X. They should instead be saying: okay, the disaster is now unavoidable, better to start preparing for it.
Interesting. Examples?
Probably true, though probably the sequence actually goes: disaster avoidable → disaster unavoidable but severity can be mitigated → disaster unavoidable and unmitigable, time to prepare → too late for anything, we’re screwed. And I’d have thought that second phase might be quite prolonged.
Only if X is only worth doing if done immediately. What reason is there to think that’s the situation here?
Imagine the following super-crude model of climate change. In year 0, we discover that from year 50 onwards the temperature is going to rise by 0.2 degrees (Celsius) per year. There is a drastic action we can take to stop this; if we do this in year Y, the warming will stop in year Y+50. In year 100, regardless, the whole thing will magically stabilize at whatever temperature is reached then.
In this model, if we do nothing then from year 100 onwards the temperature is going to be 10 degrees hotter than now, which it’s fair to say will screw a lot of things up very badly. In fact, just doing nothing for 20 years guarantees 4 degrees of temperature rise, which is probably enough to be pretty catastrophic. So the alarmists say: “We must take action within 20 years or it’ll be a disaster!”.
OK, so now it’s 20 years on and no one has done anything yet. We have 4 degrees of temperature rise ahead of us, whatever we do. But the right thing to say isn’t “OK, disaster is unavoidable, let’s just prepare to cope with it” because the magnitude of the disaster is still open. If we take action now in year 20, we only have 4 degrees of temperature rise to cope with. If we give up on stopping the warming and switch to disaster preparation, we have to prepare for 10 degrees of temperature rise, which is much worse.
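(A minimal sketch of this toy model in code, with the same made-up parameters, just to make the arithmetic explicit; none of the numbers are real climate figures.)

```python
# Toy model: warming starts in year 50 at 0.2 C/year; acting in year Y stops the
# warming in year Y+50; everything stabilizes at year 100 regardless.
def total_warming(action_year=None, start=50, rate=0.2, lag=50, cutoff=100):
    """Temperature rise at stabilization, given the year drastic action is taken."""
    stop = cutoff if action_year is None else min(action_year + lag, cutoff)
    return rate * max(0, stop - start)

print(total_warming(None))  # never act: 10.0 degrees
print(total_warming(0))     # act in year 0: 0.0 degrees
print(total_warming(20))    # act in year 20: 4.0 degrees
```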
(And without the cutoff in year 100, if we give up and switch to disaster preparation then the disaster we have to prepare for is the near-certain extinction of the human race within a few centuries.)
For the avoidance of doubt, I am not putting this forward as an accurate account of the actual climate change situation! But it seems to me to have a lot of features in common—possible disaster ahead, considerable lag between action and eventual consequences, taking action sooner means smaller effect. And in my toy model, it seems very clear that a sensible and sincere “alarmist” will both (1) say “disaster ahead if we don’t act really soon” and (2) continue saying that for a long time as no action continues to be taken. Which is exactly what you’re saying they shouldn’t be saying in the real world. What are the relevant differences that make your inference a good one in the real world and not in my toy example?
Whether to cheat on gift aid and whether I should steal money from my parents to fund charitable donations. In my case the fear of being caught and desire to appear moral in front of other people won out over the desire to do the right thing.
When I have to go home during the holidays I have the dilemma of deciding whether it is worse to eat animal products or to argue with my parents. Normally I’d compromise and agree to eat small quantities of milk and eggs and only eat meat in cases where it would be wasted if I don’t eat it. Now Mum often cooks too much meat and tries to persuade me that the leftovers will be wasted if I don’t eat them. If I eat them, she’ll keep using the same trick. If I don’t, she’ll say that I’m being irrational and betraying my principles against wasting food.
“When the social norms consider it well within your rights to do so, when should you trust people to make their own decisions for the sake of their own interests, vs. when should you ‘paternalistically’ extrapolate their desires and make decisions such that what you think they would want if they were smarter/wiser/more disciplined comes about instead?” is a question that happens to me a surprisingly large number of times.
This often implies, but doesn’t necessarily require, positions of authority. If your good buddy who isn’t very financially savvy is willing to freely give you large sums of money with no obligations attached, do you accept? A strict Mormon who just arrived at college feels peer pressure and impulsively asks you for a drink; while you do not think it’s immoral, you know they’ll feel guilty later. Do you give it to them?
More succinctly: My respect for autonomy and my consequentialism conflict in all cases where I think I know what someone wants better than they do and have any measure of power over what happens. Paternalistic attitudes are also very lonely, there are some analogues to “heroic responsibility” here.
My current position is that consequentialism wins, and what feels like moral uncertainty is actually more a “but what if the other person really does know better?” risk which must be calculated. Respect for autonomy is not usually a fundamental value (except for sometimes, we might intrinsically value the choice) but in practice it is a heuristic which usually leads to the best consequences because people are usually best at knowing what they want.
And again a poll:
How much moral judgement do you have to exercise in your daily life (consider typical times)? [pollid:960]
Do you think this is a typical amount? [pollid:961]
Are you working in a position where you have responsibility for people (see examples below)? [pollid:962]
To make this more precise, I define “to exercise moral judgement” as matching against the following cases (feel free to suggest more):
- deciding how to treat a person based on their behavior
- deciding which thing to buy based on how the purchase affects people (e.g. in other countries)
- deciding on actions which affect one or more persons directly or indirectly (e.g. as a parent, doctor, boss or social worker).
In each of these cases, only if that decision is neither an impulse nor a cached thought (e.g. you have decided that way other times before).
I have four sons, and it is not unusual that there are fights (verbal or physical) and I have to consider how to deal with that in complicated ways. Mostly there is uncertainty about the facts, and I have to balance the interests of the offender(s) against those of the other sons possibly affected (what kind of role model is that; should that matter, ...) and also against how it affects me.
How about self/other tradeoffs? In my primary relationship I often consider a decision’s effect on myself and on my partner in more-or-less utilitarian fashion, while also taking into account heuristics around obligations and rights. Though admittedly it’s never a pure moral dilemma in that it’s usually bound up with an akrasia problem and/or a prediction problem.
For example, say I have an extra hour or two to myself to either clean or play Kerbal Space Program. Factors that could go into the decision:
- How much I’d rather play KSP than clean
- How much happiness I’ll derive from the cleanliness if I clean
- How much happiness my partner will derive from the cleanliness if I clean
- How much secondhand happiness I’ll derive from the above point
- How much of a share of chores I’ve generally been doing
Of course, even if cleaning is the rationally correct decision, that’s no guarantee I’ll do it. Or maybe I’ve been doing more than my share of chores and the best choice would be to tell my partner to do the cleaning instead, but I’m afraid of having that conversation.
And if the utility considerations point to a different choice than the fairness considerations, I’m in a rather interesting spot.
I recently faced a dilemma.
A real-estate agent called me to notify me that a property I was inquiring about was sold before auction. I was an interested party, and the fact that they did not try to solicit a price from me before accepting a signed contract from another party means they did not do their best to secure the best deal for the owner. I happen to actually know the owner as well (I have no great worries about losing the deal). I wonder if I should report the events to the owner, who effectively lost out on an unknown number of dollars (AUD ~$10,000-$50,000) from me or possibly a number of other interested parties who might have taken the opportunity to bid, had the property either gone to auction or been offered to other parties before the auction.
Extra info: The owner is currently unwell and does not need any kind of further stress in their life; also I don’t think anything can be done to change the situation as contracts have been signed (also this was a week ago); also property prices in this local marketplace have gone wild recently, causing stupid things like this to happen—probably frequently. I wonder if regulation of the bidding marketplace would make this less likely to happen.
There are probably cases where it’s rational for a real-estate agent to sell a property before an auction. An auction could well return less money than the other party offered.
You can write regulations to fix single problems. That comes with a cost. It increases bureaucracy. More forms have to be filled. It’s more complex to sell property.
In general I would expect the free market to solve this issue better than a government produced system.
Auctions aren’t free. There are fees to list items at auction and the organization running the auction is likely to collect a certain percentage of the final price as well.
The owner chose them, did he not?
And so he has responsibility for his (supposed) loss. Doing their best isn’t in the contract.
Now the question is whether the owner would like to know about it. I think not, unless he plans on making use of their service again.
On top of that, you say they didn’t solicit a price from you before selling; so you may have thought about the price more seriously had they done so, and maybe this extra number of dollars wouldn’t be there if there wasn’t the bias of possible anger/pride (no offense). For all we know, the sale could have been a good sale, and that’s why they didn’t auction. They’re the real-estate agents, and you’re the person they didn’t solicit. Sorry if that’s a bit rude (I have testosterone problems), but that’s the way it is. They’re not malicious agents; they do what they can. I think it’s better to let it be (but I’m also unsure).
(Also, if they had to chase every best offer then there would be no end to it. They’re making trade-offs and maybe you valued the property differently than most people would value it. It’s not so easy to say that they’re being incompetent when we’re incompetent in this domain ourselves.)
(Sounding a bit rude, but that’s okay, I can deal with that in my own head.) Yes, it was a good sale, well above the asking price, but considering the market now, they should know better. This was an easy way for them to make their commission before auction, but they also probably lost out on a few extra thousand dollars for themselves.
My concern is not so much about myself; but about not soliciting the rest of the market (all the parties with contracts) before auction.
I have currently decided to let it be. (You raise a good point that the owner has part responsibility; however, real-estate agents tend to have the upper hand in manipulating people, and can convince an owner to settle for a lower deal.)
I was questioning your judgment for the sake of argument, but you’re probably right about the numbers. Without more knowledge of the area, it’s impossible to say if you’re being reasonable or not, and it doesn’t really matter. You say it’s not about yourself, but you wouldn’t know it if it was about yourself, and that was what I was trying to say. It’s not about you in particular, but about you being the prejudiced party. That’s something to take into account in the resolution of the dilemma. But I should be more clear/careful.
I do not disagree.
Why was that a dilemma? I thought it would have been a dilemma only if both choices had benefits or disadvantages.
It might be possible to change things (I have limited knowledge), but it would probably involve a lot of stress on the part of those involved. And I am unsure as to whether I am obliged to share this information with other parties. While it is truth, it is not helpful truth. (Others may see this point differently.) (I usually don’t care so much for the truth being optimum, but in this case it became part of the churning thoughts.)
For clarity:
- benefit of not saying anything: no more stress
- benefit of saying something: more $$$
- disadvantage of not saying anything: it’s concealing the truth
- disadvantage of saying something: they may not want to know.
(Edit: making neat on formatting)
If you think that blissful ignorance is bad, then both choices do have disadvantages.
Separate comment post for a separate top-level moral dilemma. (x2)
I have been aware of safety failures in a workplace (several in the same workplace). Recently in this country the laws were changed to impose a duty of care on any person who visits a workplace and knows something is unsafe. Each worker has a duty of care to themselves and to other workers, as well as to other visitors to the workplace. (This includes, for example: if someone walks onto a building site without a hard hat, and an incident happens where they would have been protected if they had worn a hard hat, you as a worker are liable for not stopping them from being there unsafely.)
Various pieces of equipment have had safety features disabled, and there is a culture of unsafe practices. Whenever I mention it to either the person doing something unsafe or the supervisor, the response seems to be “don’t worry” or “the person does that at their own risk”.
At some point I believe that an intervention should be made; but I feel like to try any harder would be to overstep my authority in the workplace. (there are systems in place to report a workplace that is unsafe but I don’t feel like I can use them).
Another example is that the workplace is a very loud environment (well above safe levels), and people should be wearing ear protection for long exposure. Because there is a culture of unsafe behaviour, any new worker does not start wearing earplugs and just joins the rest of the workers in being equally unprotected from loud noise. When I raised this with the highest boss, he replied that they don’t wear ear protection when it is provided anyway, so they don’t bother to provide it any more.
I expect in 10+ years a worker will become deafened and will complain to the workplace; I don’t know who will be to blame, but I also don’t know what I can do to make a difference. (I wear ear protection always, and surprise myself if I take it off just as I leave)
Dilemma: Things are wrong, people are harming themselves | I can’t seem to fix it by talking to them or the bosses, or by demonstrating the right things to do.
Do you think this dilemma is similar to the situation of having an acquaintance who smokes?
The ear protection part is definitely similar—where by gradual harm people are causing themselves damage. Part of what makes smoking bad is that it can take so long to see the effects, and the peer pressure can keep you there.
where the question might be, “how much are you morally bound to go out of your way to encourage someone to quit smoking for their benefit?”
One version of the question is “encourage”, but there is also another version which replaces the verb with “force”. Bringing in authorities and/or legal enforcement doesn’t exactly fall under “encouragement”.
The legal encouragements around not smoking are weak at best (no one is legally compelled not to smoke). (Side note: underage obtaining of cigarettes is not related to actually smoking.) I was talking about a peer effect of encouraging people not to smoke, and a possible moral drive to encourage others to be healthy.
Separate post for a separate top-level moral dilemma.
I have from time to time become aware of the possession of illegal (according to this country’s laws) drugs by a person for personal use. While this is law-breaking behaviour (whether by a stranger or someone I know well), I don’t feel like it has been my place to make it known to the authorities.
Dilemma: Illegal but relatively harmless to others. Dilemma: Ruin the social presence of someone I know for the purpose of upholding the law/Ruin the day of a stranger I barely know (and not have personal consequences).
Even if I don’t agree with the laws, I should encourage their upkeep and signal their upkeep wherever possible (try to act in a way such that, if all players in the ideal world acted in this way, the world would be better). If people more regularly tried to adhere to the law, there might be fewer car accidents, less drunk driving… less of other things, etc.
Shouldn’t you consider not just “does society benefit from encouraging laws to be enforced?”, but also “does society benefit from encouraging laws like this to be enforced?” Helping enforce a bad law encourages society to produce and enforce more laws like that one, not just laws in general.
Would you report someone to the authorities if they were gay and that was illegal?
(See the other reply; http://lesswrong.com/lw/m6c/wild_moral_dilemmas/cdlg)
I wouldn’t be encouraging more people to be flaunting the law to change the law. I would also not be reporting instances of the breaking of the law that I became aware of where I did not feel they should receive punishment for their actions.
I don’t believe I can conclude on my own whether society benefits from the existence of laws, nor can I say that any one law benefits society, (I will leave that up to the research and statistics)
(for a side note into an interesting debate—what would you be saying to me if my answer to your question was—yes; breaking of the law should be reported?)
flouting
thanks. not sure if I should correct because then it makes this comment irrelevant.
I believe that I can conclude on my own that a law making homosexuality punishable by death does not benefit society.
I’d be pointing out that it’s the equivalent of saying “it’s wrong to lie, even if that means the Gestapo would find out about the Jews in your basement” or “you should always keep your promise, even if you promised to kill your firstborn”—it’s an extreme position that is great at signalling commitment to a position because you’ll probably never have to make good on it. If you alieved that breaking of the law should be reported, even if the law says that homosexuality is punishable by death, then you’re just a human Clippy and need to be treated accordingly.
I purposely removed the specific cases, to talk about the more general concept of “law”. Humans will have great difficulty having a reasonable debate over a specific law like in the examples you have chosen. (They are particularly emotive ones)
These statements are not mutually exclusive. I’d like to try again to be clear that I meant “The existence of laws” not “the existence of this one specific law”.
In the interest of demonstrating (my point) the inability to reason one law’s benefits to society (or to prove your point) - please reason out your entire conclusion from start to finish of why:
I expect this reasoning to be some thousands of words long to reason out your point entirely (because precisely my point is that it’s not that simple).
For your second comment; Can you make the argument without referring to a specific law?
(also please refrain from making judgements on others, feel free to judge an argument, tear it to shreds; but not the person who makes it)
Apologies for the edit: I seem to be having troubles getting formatting to work the way I want it to.
What does “nor can I say that any one law benefits society” mean, then, if not “for all X, where X is a law, I can’t say that X benefits society”?
If your statement applies to all laws, it also applies to worst case scenario laws.
(My bad—bad use of words the first time around)
You read “nor can I say that any one law benefits society” as “for all X, where X is a law, I can’t say that X benefits society”.
How does “for one X alone, where X is a law, I can’t say that X benefits society” sound instead?
Please explain (or expand) the reference to “worst case scenarios”?
Really, can I see your argument?
No, for the same reason Clippy can’t see my argument that it’s not beneficial to tile the universe with paper clips. All arguments like this depend on certain premises, and you either share those premises with me or you don’t. If you don’t, no argument can be given. And in this case, if you do, the argument is trivial.
So from your point of view nearly all people up to ~50 years ago as well as most people today are clippies?
If nearly all people 50 years ago thought homosexuality should be punishable by death, then there would have been quite a lot of executed homosexuals back then. There were in Nazi Germany, but Nazi Germany is not “nearly all people” and when Germany did stop mass murdering homosexuals, it didn’t happen because someone made a successful argument.
If nearly all people today think homosexuality should be punishable by death, there would be a lot more executed than actually are. Of course, there are large groups which still think so, but I doubt they can be persuaded by argument, and in that sense they are equivalent to Clippy.
I strongly disagree. Laws are made for a variety of reasons, some of them are quite bad and/or immoral. I feel that the inclination to “encourage the upkeep” of a law just because it’s a law is an entirely wrong way to go about it.
Some laws are bad and for them to go away they need to encounter pushback.
The following is not a well-reasoned-out thought. Where there are options for action in life, including:
- Breaking a “not good law”
- Protesting a “not good law”
- Campaigning to change a “not good law”
- Encouraging others to also break a “not good law”
I would not be encouraging anyone to think that breaking said law is the best way to have it changed.
Where I don’t think the restriction on lockpicking is a good law to have, I would not be encouraging anyone to take up lockpicking in protest of that law. (For some background: lockpicking is pretty easy; the only reason our locks are not more immune to lock picking is something of security-through-obscurity, where if no one knows how to pick a lock, we don’t need lock-pick-proof locks. In ~10 years, metal 3D printing of bump keys will probably make most of our current locks a lot more useless than they currently are; we should probably make changes now in preparation for that.)
The nature of the legal system currently (while I am no expert) is that the whole body is taken to be one body of law. And to break one law is to break the social contract that you live by in society. (I am no expert but) Some reading that might help explain what I am going on about: http://en.wikipedia.org/wiki/Social_contract http://en.wikipedia.org/wiki/Crito Although I would be pleased to be shown how out of my depth I am...
I don’t understand what that means.
Well, I don’t know about Australia, but in the US it’s pretty impossible to live without breaking laws (this is by design, in case you’re wondering). There is an interesting book about it. I think you have a highly idealistic perception of how the legal system works.
I went in search of the well reasoned out philosophy of law that I was trying to impart with some sentences of my own and then I stumbled across this quote:
source of discussion (Not an amazing source; but some interesting points are raised)
To break the law, and to “push back” on the law as you described it, is still illegal. Such is the nature of the law. Just because it’s not a good law, and breaking it might be the right thing to do, does not mean that what you are doing is legal or above the law.
In seeking clarity I would like to separate right and wrong from legal and illegal. These are entirely separate things. One should signal abiding by the law first, then consider subjective right and wrong afterwards.
Yes, I think it’s an excellent approach.
And that is what I disagree with. I think one should consider subjective right and wrong first and then decide what to do about the legal aspect.
Where right and wrong are not initially clear, the legal system (usually) has an existing opinion on the matter (or at least a way to work through it), and (I believe) the legal system was built for the purpose of assisting with right and wrong.
To take an example that I really don’t want to use: it was once believed that some now-common sexual practices were sexual deviance and were murderous in magnitude of wrongness.
The subjective right and wrong at the time would have said that these actions were wrong. The legal system at the time would have also said that these actions were wrong. If we consider that subjective right and wrong has now changed, then so has the legal system (although it is slow to catch up).
The legal system was built to provide a framework for punishment to occur for actions that are subjectively wrong. (The legal system exists for several reasons; some of them are: justice, deterrence, punishment, order.)
The legal system is not meant to be anything but in line with right and wrong (with the disadvantage of being slow to catch up). (Examples of slow-moving areas might also include patents, especially on programming and gene technology; digital crimes; and individual freedom to not be monitored.)
Sorry, still disagreeing. The justice system enforces a particular set of rules for a society. Its purpose is not to assist, but to enforce, which seems obvious to me. The purpose of enforcement is to shape behaviour by providing strong disincentives to certain activities declared criminal.
There is, of course, a correlation between what most of the population considers to be morally wrong and what is illegal. But it’s only a correlation and not a perfect match. The justice system is also bent to serve the interests of the powerful at the expense of the powerless.
I don’t know about that. If it’s “subjective”, doesn’t it depend on the person? Are you willing to accept the moral opinion of the majority as “right”?
Surely it is. It is meant to provide a society with a set of rules to keep it running, keep certain social groups powerful and others powerless (aka keeping the proles under control), etc. etc.
Take, I don’t know, say, licensing laws which regulate which professions must have a license to practice and which need not. Is there really a moral distinction there?
The laws are many and their number is literally uncountable. I am not willing to believe that all these thousands of laws and regulations stem from an attempt to “assist with right and wrong”.
Do the laws stem from an attempt to assist with right and wrong? This question should be easy to answer; it would only be a matter of finding one that does not (from the uncountable set). I will not actively look, but I will keep my eye out as I encounter legalese and continue to ask the question.
While the law can have other goals, i.e. control of people by other people, I don’t think this is a primary goal, and it might be a subversion of the purpose (just because someone could and did does not mean that is the way it should be).
I sincerely hope I never find an active law that exists for purposes other than to assist with right and wrong (otherwise I should be motivated to try to change it).
Isn’t this a case of fundamental attribution error? Kicking one dog does not permanently turn your essence into irrevocable dog-kickerhood.
It’s a step on a road. If the step flows from what you are, one step implies the whole path. If you do not want to take the whole path, then (as TDT and Kant advise us) do not take even one step.
Ahem.
On moral dilemmas in general:
I think it’s a case of a lot of things, but fundamental attribution error isn’t one of them.
It’s funny you should mention kicking dogs, as I think animal cruelty (and cruelty in general) provides one of the strongest rationales for virtue ethics. I don’t attach a lot of moral weight to dogs, but if I witnessed someone kicking a dog, especially if they thought they weren’t being witnessed, that would give me insight into what sort of person they are. They are displaying characteristics I do not favour.
People would be more inclined to trust and deal with me if I display pro-social characteristics they favour (and don’t display characteristics they disfavour). There are a couple of approaches to me taking advantage of this:
1) I could conspicuously display pro-social characteristics when I believe I’m under scrutiny and it’s not too costly to do so.
2) I could make myself the sort of person who is pro-social and does pro-social things, even when it’s costly or unobserved.
For sure, option 2 is more expensive than option 1, but it has the advantage of being more straightforward to maintain, and when costly opportunities to signal my pro-social virtues come along, I will take them, while those option 1 people will welch out.
If I kick a single dog in private, this erodes the basis of having taken option 2. If anyone sees me kicking a dog in private, this will undermine their trust in me. As such, I should try as much as is reasonably possible to be the sort of person who doesn’t kick dogs.
If a dog runs at your kid, teeth bared, you probably kick it away without having a dilemma; but if pushing a fat man to his death saves a bunch of kids, you have to decide to do it?
I mostly have (maybe) dilemmas of the kind ‘if I spend another hour at work, I will finish the task, but not make dinner’ which does have implications for me as a housewife; or (in the past) ‘if I fine this obviously poor flower seller, she might not earn her dinner, but others here will be less inclined to sell Cyclamen kuznetzovii’. (This latter is based on several assumptions, of course.)
This argument works equally well when you replace “kicking dogs” with “playing violent video games” or “being an atheist in a place where you are expected to be a religious believer”. But I would guess that most people here do not see it as a valid reason to stop those things.
I don’t claim that not kicking dogs is a universal moral imperative. I claim that having some internal feature that dissuades you from kicking dogs means I will like and trust you more, and be more inclined to cooperate with you in a variety of social circumstances. This is not because I like dogs, but because that feature probably has some bearing on how you treat humans, and I am a human, and so are all the people I like.
I obviously can’t directly inspect the landscape of your internal features to see if “don’t needlessly hurt things” is in there, but if I see you kicking a dog, I’m going to infer that it’s not.
Again, that can be said of violent video games or atheism. Or to generalize it a bit, it applies to putting conformity above individualism. If I have some internal feature that leads me to do exactly the things you like, you will like and trust me more and be more inclined to cooperate with me. This is true whether those things are “don’t kick dogs”, “don’t play violent video games”, “believe in God”, “be heterosexual”, or “go and kill members of the outgroup”. It doesn’t matter whether God actually exists for this to be true.
It is a property of the way human brains work that a human who kicks dogs is likely to be cruel in other ways. Similar arguments may apply to some of the items on your list, although the degree varies by item and many are currently subject to mind-killing debate.
I think we’re talking past each other here. I’m not talking about how to cooperate with anybody, or how to cooperate in a value-hostile social environment. I’m talking about how I can cooperate with people I want to cooperate with.
I’m talking about that too. For slightly different values of “you”, where “you” want to cooperate with fellow religious believers because you think they are more likely to share your desires and values.
Well, if we’re talking about that version of “me”, why not talk about the version of “me” who’s a member of the International Dog-Kicking Association? For any given virtue you can posit some social context where that virtue is or is not desirable. I’m not sure what that accomplishes.
The International Dog-Kicking Association is something you just made up, so the fact that a rule fails when applied to it doesn’t mean the rule will cause any problems in real life. Religion actually exists.
I really don’t know what we’re actually disagreeing about here, so I’m going to tap out. Have a nice evening.
(If it’s not evening where you are yet, then have a tolerable rest of the day, and then have a nice evening)
Also, on the broader subject of fundamental attribution error, in some cases there are fundamental attributes. If I see someone exhibiting sadistic tendencies (outside of a controlled consensual environment), I don’t care how bad a day they’re having. If I can at all avoid it, I don’t want them on my team.
The FAE is a complicated issue. It is an error of prediction, sure, but not an error of passing moral judgement. It means that if average, normal people can do bad things in bad circumstances, and if that is the most common case, then being an average, normal person is simply not good enough, so we need a secular version of “we are all sinners”.
I should write about this at more length. The crux of the issue is that we tend to think being average means you are okay, because that is literally how our minds work: our moral instincts are based on social approval in some prehistoric tribe, so being a typical tribe member has “okay” written all over it. That is why we like to think that people who do bad have an abnormal “essence”. But if the FAE shows it is not so, that most people who look like they did something horrible were driven there under the pressure of uniquely bad circumstances, then we must admit that average means bad, and we have only two choices: either forgive everybody or damn everybody. Guess what, the second is safer. We have historical experience with a “we are all sinners” view; it kind of functioned. We don’t have much with a “set all the innocent souls in the prisons free” sort of policy.
Or we silently forget the FAE and go on with the ancient custom of ritually excommunicating / scapegoating (René Girard) people who did bad things under the pressure of bad circumstances, and pretend they are made of a rotten essence, so that we can salvage our illusion that most people are good and thus would not do bad things in bad circumstances. Perhaps this noble lie works best, as it also has a lot of historical testing behind it.
To keep to the dog-kicking example, there are 3 kinds of people:
1. People who’d never kick a dog in any circumstances.
2. People who’d normally never kick a dog, but might do it if the dog keeps running in front of their feet when they urgently need to catch a train to get to a job interview that might save them from having to live under a bridge.
3. People who love kicking dogs and do it any time they think they can get away with it.
Maybe you think that the 2s aren’t good enough, but surely they’re a whole lot better than the 3s (IMO they’re quite close to 1s, much closer than to the 3s). The FAE is what happens when you see a 2 kicking a dog for the first and only time in his life, and you decide he’s a 3.
It is a bit of a deeper issue. Let’s take something truly unacceptable, like a murderous rampage. Doesn’t the FAE say a normal person can be provoked into it? If yes, then the normal person is not good enough.
All the FAE says is that people tend to attribute other people’s behaviour to their innate characteristics when in fact their circumstances may be much more important, while in their own case they explain any bad acts by pointing at the circumstances. It doesn’t say that people don’t have any innate tendencies at all.
In fact, it’s just as valid to say that the FAE is about our refusal to admit some of our innate tendencies are bad.