It’s possible to have both a strong reason to care and weak evidence, e.g. when the moral hazard depends on some doubtful proposition. People often adopt precautionary principles in such scenarios.
I don’t think that’s the situation here, though. That sounds like a description of this situation: (imagine) we have weak evidence that 1) snakes are sapient, and we grant that 2) sapience is morally significant. Therefore (perhaps) we should avoid wanton harm to snakes.
Part of why this argument might make sense is that (1) and (2) are independent. Our confidence in (2) is not contingent on the small probability that (1) is true: whether or not snakes are sapient, we’re all agreed (let’s say) that sapience is morally significant.
On the other hand, the situation with qualia is one where we have weak evidence (suppose) that A) qualia are real, and we grant that B) qualia are morally significant.
The difference here is that (B) is false if (A) is false. So the fact that we have weak evidence for (A) means that we can have no stronger (and likely, we must have yet weaker) evidence for (B).
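The inequality being invoked here can be checked directly: if (B) cannot be true when (A) is false, then every possibility in which (B) holds is one in which (A) holds, so P(B) ≤ P(A) under any probability assignment. A minimal sketch in Python; the probability values are invented purely for illustration:

```python
# If B entails A (B can only be true when A is true), then every world
# where B holds is also a world where A holds, so P(B) <= P(A).
# Illustrative joint distribution over (A, B); the numbers are made up.
worlds = {
    (True, True): 0.10,   # qualia real and morally significant
    (True, False): 0.05,  # qualia real but not morally significant
    (False, False): 0.85, # qualia not real; B cannot hold without A
}
p_a = sum(p for (a, _), p in worlds.items() if a)
p_b = sum(p for (_, b), p in worlds.items() if b)
assert p_b <= p_a  # confidence in B can be no stronger than in A
```

Any reassignment of mass among these three worlds preserves the inequality, since (B) has no probability mass outside (A).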
Does the situation change significantly if “the situation with qualia” is instead framed as A) snakes have qualia and B) qualia are morally significant?
Yes, if the implication of (A) is that we’re agreed on the reality of qualia but are now wondering whether or not snakes have them. No, if (A) is just a specific case of the general question ‘are qualia real?’. My point was probably put in a confusing way: all I meant to say was that Juno seemed to be arguing as if it were possible to be very confident about the moral significance of qualia while being only marginally confident about their reality.
(nods) Makes sense.
What? Are you saying we have weak evidence for qualia even in ourselves?
What I think of the case for qualia is beside the point; I was just commenting on your ‘moral hazard’ argument. There you said that even if we assume that we have only weak evidence for the reality of qualia, we should take the possibility seriously, since we can be confident that qualia are morally significant. I was just pointing out that this argument is made problematic by the fact that our confidence in the moral significance of qualia can be no stronger than our confidence in their reality, and therefore by assumption must be weak.
But of course it can. I can be much more confident in
(P → Q)
than I am in P. For instance, I can be highly confident that if I won the lottery, I could buy a yacht.
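The lottery reply turns on the fact that the material conditional P → Q is true whenever P is false, so P(P → Q) = P(¬P) + P(P ∧ Q) can be far larger than P(P). A quick numerical check, with made-up probabilities:

```python
# P(P -> Q) = P(not P) + P(P and Q): the material conditional holds in
# every world where the antecedent is false, so its probability can be
# far higher than the antecedent's. Numbers are illustrative only.
p_win = 0.001                # P: I win the lottery
p_yacht_given_win = 0.99     # P(Q | P): if I won, I could buy a yacht
p_conditional = (1 - p_win) + p_win * p_yacht_given_win
assert p_conditional > p_win  # confidence in (P -> Q) exceeds P(P)
```

Note that this concerns the conditional (P → Q), whereas the qualia case above concerns the conjunct-like claim that B holds at all, which does entail A; that is where the two arguments come apart.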
I am guessing that Juno_Watt means that strong evidence for our own perception of qualia makes them real enough to seriously consider their moral significance, whether or not they are “objectively real”.
Yes, they often do.
On your view, is there a threshold of doubtfulness of a proposition below which it is justifiable to not devote resources to avoiding the potential moral hazard of that proposition being true, regardless of the magnitude of that moral hazard?
I don’t think it’s likely my house will catch fire, but I take out fire insurance. OTOH, if I don’t set a lower bound I will be susceptible to Pascal’s muggings.
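The trade-off in the fire-insurance comparison can be made explicit with a toy decision rule: act on an expected-value calculation only when the probability clears a floor, so astronomical stakes attached to negligible probabilities cannot dominate. The floor, payoffs, and function name here are all invented for illustration:

```python
# Toy precautionary rule: consider a risk only if its probability clears
# a floor; below the floor, ignore it regardless of the claimed stake.
# The floor and all payoffs below are invented for illustration.
PROBABILITY_FLOOR = 1e-6

def should_insure(p_event, loss, premium):
    """Pay the premium iff the risk clears the floor and expected loss
    exceeds the cost of insuring against it."""
    if p_event < PROBABILITY_FLOOR:
        return False  # blocks Pascal's muggings with astronomical stakes
    return p_event * loss > premium

# House fire: unlikely, but expected loss (600) exceeds the premium (400).
assert should_insure(p_event=0.002, loss=300_000, premium=400)
# Pascal's mugging: enormous claimed stake, probability below the floor.
assert not should_insure(p_event=1e-12, loss=1e15, premium=5)
```

Without the floor, the second case would go through, since 1e-12 × 1e15 = 1000 far exceeds the premium of 5; the lower bound is doing all the anti-mugging work.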