Assume there’s a threshold at which sending the ship for repairs is morally obligatory (if we’re utilitarians, that is the point at which the cost of the repairs is less than the expected cost of the ship sinking, i.e. the probability of sinking times the loss if it does, taking into account the lives aboard, but the threshold needn’t be utilitarian for this to work).
Let’s say that the threshold is 5% - if there’s more than a 5% chance the ship will go down, you should get it repaired.
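A minimal sketch of that expected-value comparison in code (the repair cost and the value at risk are invented purely for illustration, chosen so the implied threshold comes out at the 5% above):

```python
# Hedged illustration of the expected-value threshold described above.
# All figures are invented; only the comparison matters.

def should_repair(p_sink: float, repair_cost: float, loss_if_sinks: float) -> bool:
    """Repairs are obligatory once the expected loss from sinking
    exceeds the cost of the repairs."""
    return p_sink * loss_if_sinks > repair_cost

REPAIR_COST = 50_000       # cost of the refit (arbitrary units)
LOSS_IF_SINKS = 1_000_000  # ship, cargo, and (monetised) lives aboard

threshold = REPAIR_COST / LOSS_IF_SINKS                   # 0.05, i.e. the 5% above
print(should_repair(0.10, REPAIR_COST, LOSS_IF_SINKS))    # True: repair
print(should_repair(0.03, REPAIR_COST, LOSS_IF_SINKS))    # False: below the threshold
```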
Mr. Grumpy’s thought process seems to be ‘I alieve that my ship will sink, but this alief is harmful and I should avoid it’. He is morally justified in quelling his nightmares, but he’d be morally unjustified if in doing so he rationalized away his belief ‘there’s a 10% chance my ship will sink’ to arrive at ‘there’s a 3% chance my ship will sink’ and thereby did not do the repairs.
Likewise, it’s great that Mr. Happy doesn’t want to worry, but if you asked him to bet on the ship going down, what odds would he demand? If he thinks that the probability of his ship going down is greater than 5%, then he should have gotten it refitted. If he knows he has a bias toward neglecting negative events, and he knows that his estimate of 1% is probably the result of rationalization rather than reasoning, he should get someone else to estimate, or correct his own estimate for that known bias.
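One way to picture that correction (the bias model here, a fixed factor on the odds, is entirely an invented assumption for illustration):

```python
# Invented illustration: correcting a probability estimate for a known
# tendency to understate the odds of bad outcomes by some fixed factor.

def debias(p_naive: float, understatement_factor: float) -> float:
    """Scale the naive odds back up by the factor I know I tend to
    understate them by, then convert back to a probability."""
    odds = p_naive / (1 - p_naive)
    corrected_odds = odds * understatement_factor
    return corrected_odds / (1 + corrected_odds)

# Mr. Happy's naive estimate is 1%; suppose he knows from experience that
# he understates the odds of misfortune by roughly a factor of five.
p_corrected = debias(0.01, understatement_factor=5)
print(round(p_corrected, 3))   # ~0.048: much closer to the 5% threshold
```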
Mr. Doc looks at this probability and deems it acceptable (so, presumably, below our action threshold). He is not guilty of anything.
Assume there’s a threshold at which sending the ship for repairs is morally obligatory.
Sorry, I am unwilling to assume any such thing. I would prefer a bit more realistic scenario where there is no well-known and universally accepted threshold. The condition of ships is uncertain, different people can give different estimates of that condition, and different people would choose different actions even on the basis of the same estimate.
In particular,
Mr. Doc looks at this probability and deems it acceptable (so, presumably, below our action threshold)
Mr. Doc has his own threshold, which does not necessarily match yours, or anyone else’s, or even whatever passes for society’s consensus.
Sorry, I am unwilling to assume any such thing. I would prefer a bit more realistic scenario where there is no well-known and universally accepted threshold. The condition of ships is uncertain, different people can give different estimates of that condition, and different people would choose different actions even on the basis of the same estimate.
It doesn’t have to be well-known. Morally there’s a threshold. Everyone who is trying to act morally is trying to ascertain where it should be, and everyone who isn’t acting morally is taking advantage of the uncertainty about where the threshold is to avoid spending money. That doesn’t change that there is a threshold.
Consider doctors sending patients in for surgery after a cancer screening. It is hard to estimate whether someone has cancer, and different doctors might recommend different actions on the basis of the same estimate. This does not change the fact that, in fact, there’s a place to put the threshold that balances the risk of sending in patients for unnecessary surgery and the risk of letting cancer spread. On any ethical question this threshold exists. We don’t have to be certain about it to acknowledge that judging where it is and where cases fall with respect to it is basically always what we’re doing.
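A sketch of the same structure for the screening case (the two harm figures are invented; the point is only that a threshold falls out of weighing them against each other):

```python
# Invented illustration of the screening threshold: operate once the
# expected harm of waiting exceeds the harm of an unnecessary surgery.

HARM_OF_SURGERY = 10            # harm of operating on a healthy patient (arbitrary units)
HARM_IF_CANCER_SPREADS = 100    # harm of leaving a real cancer untreated

def should_operate(p_cancer: float) -> bool:
    return p_cancer * HARM_IF_CANCER_SPREADS > HARM_OF_SURGERY

# The implied threshold sits where the two expected harms are equal:
threshold = HARM_OF_SURGERY / HARM_IF_CANCER_SPREADS   # 0.10: operate above 10% risk
print(should_operate(0.25))   # True
print(should_operate(0.05))   # False
```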
Mr. Doc’s actions are morally right to the extent he’s right (given the evidence he could reasonably have acquired) about the threshold.
It doesn’t have to be well-known. Morally there’s a threshold. Everyone who is trying to act morally is trying to ascertain where it should be
So, are you assuming moral realism? That moral threshold which “is”, does it objectively exist? Is it the same for everyone, in all times and all cultures?
This does not change the fact that, in fact, there’s a place to put the threshold that balances the risk
Why do you think there is one specific place? That threshold depends on, among other things, risk tolerance. Are you saying that everyone does (or should have) the same risk tolerance?
I am not sure that we’re communicating meaningfully here. I said that there’s a place to set a threshold that weighs the expense against the lives. All that is required for this to be true is that we assign value to both money and lives. Where the threshold is depends on how much we value each, and obviously this will be different across situations, times, and cultures.
You’re conflating a practical concern (which behaviors should society condemn?) with an ethical concern (how do we decide the relative value of money and lives?), which isn’t even a particularly interesting ethical concern (governments have standard figures for the value of a human life; they’d need such figures to conduct any interventions at all). And I am less certain than I was at the start of this conversation of what sort of answer you are even interested in.
I said that there’s a place to set a threshold that weighs the expense against the lives.
Do you mean one common threshold, or an individual threshold that might be different for each person? I read you as arguing for one common threshold. If we are talking about individual thresholds then I don’t see any issues: everyone just sets them wherever they like and that’s it.
You’re conflating a practical concern (which behaviors should society condemn?)
I don’t believe I said anything about what society should condemn.
what sort of answer you are even interested in
My interest started with this, as my post noted, and it mostly focuses on determining the morality of the action solely on the basis of mental states, past and present.
I don’t believe I said anything about what society should condemn.
Well, your arguments only make sense if that is how you’re interpreting ‘amoral’.
My interest started with this, as my post noted, and it mostly focuses on determining the morality of the action solely on the basis of mental states, past and present.
KPier’s whole argument is that the morality of the action depends on the objective conditions of the ship and the objective evidence available to the owner. The owner’s mental processes are moral (or amoral) to the extent they cause his beliefs to align (or fail to align) with reality.
As for guilt, do you think Marx’s ghost should feel guilty about the results of his philosophy, or should he just say “well, I tried to improve the world”?
Well, your arguments only make sense if that is how you’re interpreting ‘amoral’.
That sounds strange to me, can you expand on that?
KPier’s whole argument is that the morality of the action depends on the objective conditions of the ship and the objective evidence available to the owner.
So then he disagrees with W. K. Clifford, doesn’t he? The Clifford quote is all about the subjective.
That sounds strange to me, can you expand on that?
Your objections amount to the claim that “being able to be evaluated by outside observers” should be a property of morality. That is a necessary property of a theory of what society should condemn; it is less clear why it’s a necessary property of morality.
So then he disagrees with W. K. Clifford, doesn’t he? The Clifford quote is all about the subjective.
And the owner’s mental process is immoral because it leads the owner to evaluate the evidence incorrectly.
Your objections amount to the claim that “being able to be evaluated by outside observers” should be a property of morality.
Um, no, I don’t think so. I don’t think I’m making any claims about properties of morality. Mostly, I’m just poking KPier’s/Clifford’s position to check for coherence.
because it leads the owner to evaluate the evidence incorrectly.
As I posted before, I don’t find any objective evidence in that quote besides the two observations that the ship was old and that it sank.