Will and I just spoke on the phone, so here’s another way to present our discussion:
Imagine a species of artificial agents. These agents have a list of belief statements that relate physical phenomena to normative properties (let’s call them ‘moral primitives’):
‘Liking’ reward signals in human brains are good.
Causing physical pain in human infants is forbidden.
etc.
These agents also have a list of belief statements about physical phenomena in general:
Sweet tastes on the tongue produce reward signals in human brains.
Cutting the fingers of infants produces physical pain in infants.
Things are made of atoms.
etc.
These agents also have an ‘ought’ function that includes a series of logical statements that relate normative concepts to each other, such as:
A thing can’t be both permissible and forbidden.
A thing can’t be both obligatory and non-obligatory.
etc.
Finally, these robots have actuators that are activated by a series of rules like:
When the agent observes an opportunity to perform an action that is ‘obligatory’, it will take that action.
An agent will avoid any action that is labeled as ‘forbidden.’
Some of these rules might include utility functions that encode ordinal or cardinal value for varying combinations of normative properties.
These agents can’t see their own source code. The combination of the moral primitives, the ought function, the non-ought belief statements, and the rules about behavior produces their actions and their verbal statements about what ought to be done.
From their behavior and verbal ought statements these robots can infer to some degree how their ought function works, but they can’t fully describe it, either because they haven’t run enough tests or because the ought function is just too complicated; the problem is made worse by the fact that they also can’t see their moral primitives.
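To make that picture concrete, here is a minimal sketch of how such an agent could be wired together. Everything below (the names, the particular primitives, the rule that treats ‘good’ like ‘obligatory’) is a hypothetical illustration invented for the sketch, not something Will and I actually specified:

    # A purely illustrative sketch of the agent described above; all names and rules are hypothetical.

    MORAL_PRIMITIVES = {              # physical phenomenon -> normative label
        "liking_reward_signal": "good",
        "infant_pain": "forbidden",
    }

    PHYSICAL_BELIEFS = {              # action -> physical phenomenon it produces
        "feed_sweets": "liking_reward_signal",
        "cut_infant_fingers": "infant_pain",
    }

    def ought_function(labels):
        """Purely logical constraints relating normative concepts to each other."""
        # e.g. a thing can't be both permissible and forbidden
        assert not ({"permissible", "forbidden"} <= labels)
        return labels

    def judge(action):
        """Combine physical beliefs and moral primitives into a normative judgment."""
        phenomenon = PHYSICAL_BELIEFS.get(action)
        label = MORAL_PRIMITIVES.get(phenomenon)
        return ought_function({label} if label else set())

    def choose(available_actions):
        """Actuator rules: never take a 'forbidden' action; take a 'good'/'obligatory' one if available."""
        for action in available_actions:
            labels = judge(action)
            if "forbidden" in labels:
                continue
            if labels & {"good", "obligatory"}:
                return action
        return None

    print(choose(["cut_infant_fingers", "feed_sweets"]))   # -> feed_sweets

The robots themselves, of course, can’t inspect any of this; they only see the behavior it produces.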
The ought function doesn’t reduce to physics because it’s a set of purely logical statements. The ‘meaning’ of ought in this sense is determined by the role that the ought function plays in producing intentional behavior by the robots.
Of course, the robots could speak in ought language in stipulated ways, such that ‘ought’ means ‘that which produces pleasure in human brains’ or something like that, and this could be a useful way to communicate efficiently, but it wouldn’t capture what the ought function is doing or how it is contributing to the production of behavior by these agents.
What Will is saying is that it’s convenient to use ‘ought’ language to refer to this ought function only, and not also to a combination of the ought function and statements about physics, as happens when we stipulatively use ‘ought’ to talk about ‘that which produces well-being in conscious creatures’ (for example).
I’m saying that’s fine, but it can also be convenient (and intuitive) for people to use ‘ought’ language in ways that reduce to logical-physical statements, and not only in ways that express a logical function that contains only transformations between normative properties. So we don’t have substantive disagreement on this point; we merely have different intuitions about the pragmatic value of particular uses for ‘ought’ language.
We also drew up a simplified model of the production of human action in which there is a cognitive module that processes the ‘ought’ function (made of purely logical statements, like the robots’ ought function), a cognitive module that processes habits, a cognitive module that processes reflexes, and so on. Each of these produces an output, and another module runs an argmax over these action options to determine which action ‘wins’ and actually occurs.
Of course, the human ‘ought’ function is probably spread across multiple modules, as is the ‘habit’ function.
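As a toy illustration of that arbitration step (the modules, their outputs, and the scores below are all invented for the example; this is not a claim about actual brain architecture):

    # Toy version of the simplified action-production model above.

    def ought_module(situation):
        return ("keep_the_promise", 0.9)   # (proposed action, strength of the proposal)

    def habit_module(situation):
        return ("check_phone", 0.4)

    def reflex_module(situation):
        return ("flinch", 0.1)

    MODULES = [ought_module, habit_module, reflex_module]

    def select_action(situation):
        """Each module proposes an action; an argmax over the proposals decides which one wins."""
        proposals = [module(situation) for module in MODULES]
        winner, _score = max(proposals, key=lambda pair: pair[1])
        return winner

    print(select_action(situation=None))   # -> keep_the_promise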
Will likes to think of the ‘meaning’ of ‘ought’ as being captured by the algorithm of this ‘ought’ function in the human brain. This ought function doesn’t contain physical beliefs, but rather processes primitive normative/moral beliefs (from outside the ought function) and outputs particular normative/moral judgments, which contribute to the production of human behavior (including spoken moral judgments). In this sense, ‘ought’ in Will’s sense of the term doesn’t reduce to physical facts, but to a logical function.
I’m fine with Will using ‘ought’ in that sense if he wants. I’ll try to be clear how I am using the term when I use it.
Will also thinks that the ‘ought’ function (in his sense) inside human brains is probably very similar between humans—ones that aren’t brain damaged or neurologically deranged. I don’t know how probable this is because cognitive neuroscience hasn’t progressed that far. But if the ‘ought’ function is the same in all healthy humans, then there needn’t be a separate ‘meaning’ of ought (in Will’s sense) for each speaker, but instead there could be a shared ‘meaning’ of ought (in Will’s sense) that is captured by the algorithms of the ‘ought’ cognitive module that is shared by healthy human brains.
I’m fine with Will using ‘ought’ in that sense if he wants. I’ll try to be clear how I am using the term when I use it.
Will, did I say all of that correctly?
That doesn’t seem right. Compare (note that I don’t necessarily endorse the rest of this paper):
What does the word ‘ought’ mean? Strictly speaking, this is an empirical question, about the meaning of a word in English. Such empirical semantic questions should ideally be answered on the basis of extensive empirical evidence about the use of the word by native speakers of English.
As a philosopher, I am primarily interested, not in empirical questions about the meanings of words, but in the nature of the concepts that those words can be used to express — especially when those concepts are central to certain branches of philosophy, as the concepts expressed by ‘ought’ are central to ethics and to the theory of rational choice and rational belief. Still, it is often easiest to approach the task of giving an account of the nature of certain concepts by studying the meanings of the words that can express those concepts. This is why I shall try here to outline an account of the meaning of ‘ought’.
If you examine just one particular sense of the word “ought” (even if you make clear which sense), without systematically enumerating all of the meanings of the word, how can you know that the concept you end up studying is the one that is actually important, or the one that other people are most interested in?
I suspect there are many senses of a word like ‘ought’ that are important. As ‘pluralistic moral reductionism’ states, I’m happy to use and examine multiple important meanings of a word.
Let me expand my comment a bit, because it didn’t quite capture what I wanted to say.
I’m fine with Will using ‘ought’ in that sense if he wants.
If Will is anything like a typical human, then by “ought” he often means something other than, or more than, the sense referred to by “that sense”, and it doesn’t make sense to say that perhaps he wants to use “ought” in that sense.
When you say “I’m fine with …” are you playing the role of the Austere Metaethicist who says “Tell me what you mean by ‘right’, and I will tell you what is the right thing to do.”? But I think Austere Metaethics is not a tenable metaethical position, because when you ask a person to tell you what they mean by “right”, they will almost certainly fail to give you a correct answer, simply because nobody really understands (much less can articulate) what they mean by “right”. So what is the point of that?
Or perhaps what you meant to say instead was “I’m fine with Will studying ‘ought’ in that sense if he wants”? In that case see my grandparent comment (but consider it directed mostly towards Will instead of you).
I don’t love all your terminology, but obviously my preferred terminology’s ability to communicate my ideas on this matter has been shown to be poor.
I would put less emphasis on relationships between similar moral beliefs:
A thing can’t be both permissible and forbidden.
and more on the assembly-line process that converts the general into the specific:
This ought function doesn’t contain physical beliefs, but rather processes primitive normative/moral beliefs (from outside the ought function) and outputs particular normative/moral judgments, which contribute to the production of human behavior (including spoken moral judgments)
I’m pretty sure the first statement here only makes sense as a consequence of the second:
The ought function doesn’t reduce to physics because it’s a set of purely logical statements. The ‘meaning’ of ought in this sense is determined by the role that the ought function plays in producing intentional behavior by the robots.
The ought function doesn’t reduce to physics because it’s a set of purely logical statements. The ‘meaning’ of ought in this sense is determined by the role that the ought function plays in producing intentional behavior by the robots.
This doesn’t make sense to me. Does 28 reduce to physics in this sense? How is this “ought” thing distinguished from all the other factors (moral errors, say) that contribute to behavior (that is, how is its role located)?
First, I would say that reducibility is a property of statements. In the sense I use it:
The statement “14+14=28” is reducible to aether.
The statement “I have 28 apples” is reducible to physics.
The statement “There are 28 fundamental rules that one must obey to lead a just life” is reducible to ethics.
Moral statements are irreducible to physics in the same sense that “P is red” is irreducible to physics: for any particular physical “P”, it is reducible. The logical properties of P-statements, like “P is red or P is not red”, are given as a set of purely logical statements; that’s their analogue of the ought-function. If P-statements had some useful role in producing behavior, they would have a corresponding meaning.
Random, probably unnecessary math:
A reducible-class is a subalgebra of the Boolean algebra of statements, closed under logical equivalence. The statements reducible to aether are those in the reducible-class generated by True and False. The statements reducible to physics are those in the reducible-class generated by “The world is in exactly state X”. The statements reducible to morality are those in the reducible-class generated by “Exactly set-of-actions Y are forbidden and set-of-actions Z are obligatory”.
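Spelled out a little more formally (this is just one way to write the construction down; the notation is mine):

    Let $S$ be the set of statements modulo logical equivalence, so that $S$ forms a
    Boolean algebra under $\wedge$, $\vee$, $\neg$. For a set of generators $G \subseteq S$,
    let $R(G)$ be the smallest subalgebra of $S$ containing $G$. Then:

    \begin{align*}
    R_{\mathrm{aether}}   &= R(\{\top, \bot\}) \\
    R_{\mathrm{physics}}  &= R(\{\text{the world is in exactly state } X : X\}) \\
    R_{\mathrm{morality}} &= R(\{\text{exactly the actions in } Y \text{ are forbidden and those in } Z \text{ are obligatory} : Y, Z\})
    \end{align*}

    A statement is reducible to aether, physics, or morality iff it lies in the corresponding class.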