“Failing to reward deliberate ignorance” doesn’t equal “Punishing deliberate ignorance.” The issue here is not the ignorance; the issue is making ignorance a superior moral state to knowledge.
Take ethics out of it: Suppose you were the server admin for the Universe Server Company, where all existing universes are simulated for profit. Suppose that happy universes cost more resources to run than unhappy universes, and cost our imaginary company more money than they make; “lukewarm” universes, which are neither happy nor unhappy, make just as much money as unhappy universes. If the USC were required by law to make any universe it discovered to be less than Happy into a Happy universe, what do you suppose company policy would be about investigating the happiness level of simulated universes?
How do you suppose people who feel obligations to those worse-off than they are cope with this sense of obligation?
“Failing to reward deliberate ignorance” doesn’t equal “Punishing deliberate ignorance.”
The practical effect of this system amounts to punishing ignorance. Someone who remains ignorant takes a risk that he is being unknowingly immoral and therefore will be punished, and he can only alleviate that risk by becoming less ignorant.
In your analogy, we would “fail to reward deliberate ignorance” by requiring the Universe Server Company to make all the universes happy whether or not they had discovered them to be unhappy. That would indeed impose an obligation upon them to do nothing but check universes all the time (until they ran out of universes to check, though if the analogy fits, that isn’t possible).
Ah! You’re assuming you have the moral obligation with or without the knowledge.
No, I take the moral obligation away entirely. For the USC, this will generally result in universes systematically becoming lukewarm universes. (Happy universes get downgraded, since that saves money; unhappy universes get upgraded, since that costs the company nothing; the search itself is driven by the money-saving incentive; and I’m assuming a preference by the searchers for more happiness in the universes, all else being equal.)
A law which required universal “Happiness” would just result in USC going bankrupt, and all the universes being turned off, once USC started losing more money than they could make. A law which required any universe -discovered- to be less than Happy to be made into a Happy universe just results in a company policy prohibiting looking in the first place.
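To make the incentive structure concrete, here is a minimal sketch in Python under invented figures (revenue of 10 per universe; running costs of 12, 7, and 7 for happy, lukewarm, and unhappy universes; an inspection cost of 1), chosen only to satisfy the constraints above. It compares the profit of “inspect everything” against “don’t look” under each of the three regimes:

```python
# Toy model of the USC incentives. The revenue, running-cost, and inspection-cost
# figures are made-up assumptions that merely satisfy the stated constraints:
# Happy universes lose money; lukewarm and unhappy universes earn the same.

REVENUE = 10                                   # income per universe
RUN_COST = {"happy": 12, "lukewarm": 7, "unhappy": 7}
INSPECT_COST = 1                               # cost of checking one universe
POPULATION = ["happy", "lukewarm", "unhappy"]  # one universe of each kind


def profit(state, inspected):
    """Per-universe profit given its final state and whether it was inspected."""
    return REVENUE - RUN_COST[state] - (INSPECT_COST if inspected else 0)


def run(regime, inspect):
    """Total profit of a blanket policy (inspect everything / nothing) under a regime."""
    total = 0
    for state in POPULATION:
        if regime == "universal_happiness":
            state = "happy"            # must be made Happy, discovered or not
        elif inspect:
            if regime == "obligation_on_discovery" and state != "happy":
                state = "happy"        # discovery triggers a mandatory upgrade
            else:
                state = "lukewarm"     # downgrade Happy to save money; upgrade
                                       # the unhappy at no cost (searchers prefer it)
        total += profit(state, inspect)
    return total


for regime in ("obligation_on_discovery", "universal_happiness", "no_obligation"):
    look, dont = run(regime, True), run(regime, False)
    policy = "inspect" if look > dont else "don't look"
    print(f"{regime:24s} inspect: {look:3d}  don't look: {dont:3d}  -> policy: {policy}")
```

With these made-up numbers the toy model reproduces the claims: under the discovery-triggered obligation the winning policy is not to look; under mandatory universal Happiness every universe runs at a loss; and with no obligation at all, inspecting and leveling everything to lukewarm is the most profitable course, which is the one outcome where the unhappy universes actually get upgraded.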
So in your original example, both the rich man aware of misery and the rich man ignorant of it have no moral obligation?
If that’s what you mean, I would describe the old system as “punishing knowledge” rather than “rewarding ignorance” since the baseline under your new system is like lack of knowledge under the old system.
I also suspect not many people would agree with this system.
So in your original example, both the rich man aware of misery and the rich man ignorant of it have no moral obligation?
Correct.
If that’s what you mean, I would describe the old system as “punishing knowledge” rather than “rewarding ignorance” since the baseline under your new system is like lack of knowledge under the old system.
That’s what I attempted to describe it as; my apologies if I wasn’t clear.
I also suspect not many people would agree with this system.
We are in agreement here.