What kind of moral dilemmas do you actually encounter?
None. I’m a virtue ethicist, more or less, of an Objectivist bent. A “dilemma”, to me, is a choice between two equally good things (which virtue I want to emphasize), rather than two equally bad things.
Do you have any thoughts on how much moral judgement you have to exercise in your daily life? Do you think this is a typical amount?
It feels like “None.”
Do you have any examples of pedestrian moral dilemmas to which you’ve applied abstract moral reasoning? How did that work out?
No.
Do you have any examples of personal moral dilemmas on a Trolley Problem scale that nonetheless happened?
No.
“Trolley Problems” are less about describing genuinely difficult situations, and more about trying to find faults with ethical systems or decision theories by describing edge scenarios. To me, they’re about as applicable as “Imagine there’s an evil alien god who will kill everyone if you’re a utilitarian. What is the most utilitarian thing to do?”
ETA: In fairness, though, I don’t see any ethical issue in the Trolley Problem to begin with, unless you tied all the people to the tracks in the first place. I regard as fatally flawed any ethical system which makes a rich man who walks through a rich neighborhood, completely ignorant of any misery, -more ethical- than a rich man who is aware of misery but does nothing about it. Whether or not you qualify as a “good” person shouldn’t depend on your environment, and any ethical system which rewards deliberate ignorance is fatally flawed.
Failing to reward deliberate ignorance has its own problems: all ignorance is “deliberate” in the sense that you could always spend just a bit more time reducing your ignorance. How do you avoid requiring people to spend all their waking moments relieving their ignorance?
“Failing to reward deliberate ignorance” doesn’t equal “Punishing deliberate ignorance.” The issue here is not the ignorance, the issue is in making ignorance a superior moral state to knowledge.
Take ethics out of it: Suppose you were the server admin for the Universe Server Company, where all existing universes are simulated for profit. Suppose that happy universes cost more resources to run than unhappy universes, and cost our imaginary company more money than they make; “lukewarm” universes, which are neither happy nor unhappy, make just as much money as unhappy universes. If the USC were required by law to make Happy any universe it discovered to be less than Happy, what do you suppose company policy would be about investigating the happiness level of simulated universes?
How do you suppose people who feel obligations to those worse-off than they are cope with this sense of obligation?
“Failing to reward deliberate ignorance” doesn’t equal “Punishing deliberate ignorance.”
The practical effect of this system amounts to punishing ignorance. Someone who remains ignorant takes a risk that he is being unknowingly immoral and therefore will be punished, and he can only alleviate that risk by becoming less ignorant.
In your analogy, we would “fail to reward deliberate ignorance” by requiring the Universe Server Company to make all the universes happy whether or not it had discovered their unhappiness. That would indeed impose an obligation upon them to do nothing but check universes all the time (until they run out of universes, though if the analogy fits, that isn’t possible).
Ah! You’re assuming you have the moral obligation with or without the knowledge.
No, I take the moral obligation away entirely. For the USC, this will generally result in universes systematically becoming lukewarm universes. (Happy universes get downgraded, since that saves money; unhappy universes get upgraded, since that costs the company nothing; the search itself is driven by the money saved on downgrades; and I’m assuming the searchers prefer more happiness in the universes, all else being equal.)
A law which required universal “Happiness” would just result in the USC going bankrupt, and all the universes being turned off, once it started losing money faster than it could make it. A law which required any universe -discovered- to be less than Happy to be made into a Happy universe would just result in company policy prohibiting looking in the first place.
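For concreteness, here is a minimal sketch of that incentive structure in Python. The per-universe profits, the total_profit helper, and the three-regime comparison are my own illustrative assumptions, not part of the original setup; they just encode “happy universes lose money, lukewarm and unhappy universes earn the same.”

# Minimal sketch of the USC incentive argument, with made-up numbers:
# happy universes lose money to run; lukewarm and unhappy universes
# earn the same modest profit.

PROFIT = {"happy": -2, "lukewarm": 1, "unhappy": 1}  # hypothetical per-universe profit

def total_profit(universes, law, inspect):
    """Company profit for a stock of universes under a legal regime.

    law "none":          no obligation; conversions are driven by profit alone.
    law "if_discovered": any universe inspected and found less than Happy
                         must be made Happy.
    law "universal":     every universe must be made Happy, inspected or not.
    """
    profit = 0
    for state in universes:
        if law == "universal":
            state = "happy"          # forced upgrade everywhere
        elif law == "if_discovered" and inspect and state != "happy":
            state = "happy"          # looking triggers the obligation
        elif law == "none" and inspect:
            state = "lukewarm"       # downgrade happy to save money,
                                     # upgrade unhappy at no cost
        profit += PROFIT[state]
    return profit

stock = ["happy", "lukewarm", "unhappy"] * 3  # nine hypothetical universes

print("no law, inspecting:         ", total_profit(stock, "none", True))           #   9
print("discovery law, inspecting:  ", total_profit(stock, "if_discovered", True))  # -18
print("discovery law, not looking: ", total_profit(stock, "if_discovered", False)) #   0
print("universal law:              ", total_profit(stock, "universal", True))      # -18

With these assumed numbers, “never look” is the only policy under the discovery-triggered law that doesn’t lose money, while the universal law loses money on every universe, which is the point.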
So in your original example, both the rich man aware of misery and the rich man ignorant of it have no moral obligation?
If that’s what you mean, I would describe the old system as “punishing knowledge” rather than “rewarding ignorance” since the baseline under your new system is like lack of knowledge under the old system.
I also suspect not many people would agree with this system.
So in your original example, both the rich man aware of misery and the rich man ignorant of it have no moral obligation?
Correct.
If that’s what you mean, I would describe the old system as “punishing knowledge” rather than “rewarding ignorance” since the baseline under your new system is like lack of knowledge under the old system.
That’s what I attempted to describe it as; my apologies if I wasn’t clear.
I also suspect not many people would agree with this system.
We are in agreement here.