Discussions of moral uncertainty normally use the consensus options among ethical theories (deontology, consequentialism) and then propose some kind of solution, like intertheoretic comparison. The decompositions posed in the post create more problems to solve, like the correct description of values.
I assume the author would favor some kind of consequentialism as the correct theory, which would settle the third uncertainty. Feel free to respond if that's not the case.
I skimmed that thesis before making this post. Although it’s true that it discusses only a subset of the problems reviewed in this post, I found it problematic that the author never seemed to examine what moral uncertainty really is before proposing methods for resolving it. Well, he did, but I think he avoided the most important questions. For instance:
Suppose that Jo is driving home (soberly, this time), but finds that the road is blocked by a large deer that refuses to move. Suppose that her only options are to take a long detour, which will cause some upset to the friends that she promised to meet, or to kill the deer, roll it to the side of the road, and drive on. She’s pretty sure that non-human animals are of no moral value, and so she’s pretty sure that she ought to kill the deer and drive on. But she’s not certain: she thinks there is a small but sizable probability that killing a deer is morally significant. In fact, she thinks it may be something like as bad as killing a human. Other than that, her ethical beliefs are pretty common-sensical.
As before, it seems like there is an important sense of ‘ought’ that is relative to Jo’s epistemic situation. It’s just that, in this case, the ‘ought’ is relative to Jo’s moral uncertainty, rather than relative to her empirical uncertainty. Moreover, it seems that we possess intuitive judgments about what decision-makers ought, in this sense of ought, to do: intuitively, to me at least, it seems that Jo ought not to kill the deer.
It’s this sense of ‘ought’ that I’m studying in this thesis. I don’t want to get into questions about the precise nature of this ‘ought’: for example, whether it is a type of moral ‘ought’ (though a different moral ‘ought’ from the objective and subjective senses considered above), or whether it is a type of rational ‘ought’. But this example, and others that we will consider throughout the thesis, show that, whatever the nature of this ‘ought’, it is an important concept to be studied.
(Emphasis mine)
And:
To recap: I assume that we have a decision-maker who has a credence distribution; she has some credence in a certain number of theories, each of which provides a ranking of options in terms of choice-worthiness. Our problem is how to combine these choice-worthiness rankings, indexed to degrees of credence, into one appropriateness-ranking: that is, our problem is to work out what choice-procedures are most plausible. [...]
Before moving on, we should distinguish subjective credences, that is, degrees of belief, from epistemic credences, that is, the degree of belief that one is epistemically justified in having, given one’s evidence. When I use the term ‘credence’ I refer to epistemic credences (though much of my discussion could be applied to a parallel discussion involving subjective credences); when I want to refer to subjective credences I use the term ‘degrees of belief’.
The reason for this is that appropriateness seems to have some sort of normative force: if it is most appropriate for someone to do something, it seems that, other things being equal, they ought, in the relevant sense of ‘ought’, to do it. But people can have crazy beliefs: a psychopath might think that a killing spree is the most moral thing to do. But there’s no sense in which the psychopath ought to go on a killing spree: rather, he ought to revise his beliefs. We can only capture that idea if we talk about epistemic credences, rather than degrees of belief.
But there seems to be no discussion of what it means to have a “degree of belief” in an ethical theory (or if there is, I missed it). I would expect that reducing the concept to the level of cognitive algorithms would be a necessary first step before we could say anything useful about it.
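For concreteness, here is a minimal sketch of what the quoted step (combining choice-worthiness rankings, indexed to credences, into one appropriateness-ranking) looks like if we simply maximize expected choice-worthiness over the Jo example. The credences and numeric choice-worthiness values below are my own hypothetical placeholders, not the author's; note that the sketch treats "credence in a theory" as a bare number, which is exactly the part that seems under-analyzed, and that nothing in the theories themselves tells us how to put the numbers on a common scale (the intertheoretic-comparison problem).

```python
# Hypothetical epistemic credences in each moral theory.
credences = {
    "animals_dont_matter": 0.9,
    "killing_deer_as_bad_as_killing_human": 0.1,
}

# Hypothetical choice-worthiness of each option under each theory.
# The cross-theory scaling of these numbers is stipulated, not derived.
choice_worthiness = {
    "kill_deer":   {"animals_dont_matter": 10,
                    "killing_deer_as_bad_as_killing_human": -1000},
    "take_detour": {"animals_dont_matter": -5,
                    "killing_deer_as_bad_as_killing_human": -5},
}

def expected_choice_worthiness(option):
    """Credence-weighted sum of choice-worthiness across theories."""
    return sum(credences[t] * cw for t, cw in choice_worthiness[option].items())

for option in choice_worthiness:
    print(option, expected_choice_worthiness(option))
# kill_deer -> -91.0, take_detour -> -5.0 with these placeholder numbers,
# matching the intuition that Jo ought not to kill the deer; but the verdict
# is hostage to the arbitrary choice of scale for each theory's numbers.
```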