I’m confused by the terminology, but I think I would be a relativist objectivist.
I certainly think that morality is relative—what is moral is agent-dependent—but whether or not the agent is behaving morally is an objective fact about that agent’s behavior, because the behavior either does or doesn’t conform with that agent’s morality.
But I don’t think the distinction between a relativist objectivist and a relativist subjectivist is terribly exciting: it just depends on whether you consider an agent ‘moral’ if it conforms to its morality (relativist objectivist) or yours (relativist subjectivist).
But maybe I’ve got it wrong, because this view seems so reasonable, whereas you’ve indicated that it’s rare.
The key phrase for subjectivism is “mind-dependent,” so if you think other people’s morality comes from their minds, then you are a relativist subjectivist.
I just realized I don’t think people should conform to their own morality, I think people should conform to my morality which I guess would make me a subjective non-relativist.
So you believe that the word morality is a two-place word and means what an agent would want to do under certain circumstances? What word do you use to mean what actually ought to be done? The particular thing that you, and to a large degree all humans, would want to do under specified circumstances? Or do you believe there isn’t anything that should be done other than whatever the agents that exist want? Please note that that position is also a statement about what the universe ought to look like.
Yes, morality is a two-place word—the evaluation function of whether an action is moral has two inputs: agent, action. “Agent” can be replaced by anything that conceivably has agency, so morality can be considered system-dependent, where systems include social groups and all humanity, etc.
I wouldn’t say morality is what the agent wants to do, but rather what the agent ought to do, given its preferences. So I think I am still using it in the usual sense.
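To make the two-input framing concrete, here is a minimal, purely illustrative sketch; the Agent class, preferences field, and is_moral function are hypothetical names of my own, not anything defined in the article or the thread:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """Stand-in for anything with agency: a person, a social group, all of humanity."""
    name: str
    preferences: Callable[[str], float]  # how strongly this agent endorses an action

def is_moral(agent: Agent, action: str, alternatives: list[str]) -> bool:
    """Two-place evaluation: moral(agent, action).

    On the reading above, the verdict is a fact about the agent-action pair:
    the action is moral iff it is what this agent ought to do given its own
    preferences (here, the preference-maximizing option among alternatives).
    """
    best = max(alternatives, key=agent.preferences)
    return agent.preferences(action) >= agent.preferences(best)

# The same action can be moral for one agent and not for another, yet each
# verdict is a fact that does not depend on anyone's assessment of it.
megan = Agent("Megan", preferences=lambda a: {"help": 1.0, "harm": -1.0}.get(a, 0.0))
print(is_moral(megan, "help", ["help", "harm"]))  # True
print(is_moral(megan, "harm", ["help", "harm"]))  # False
```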
What word do you use to mean what actually ought to be done? The particular thing that you, and to a large degree all humans, would want to do under specified circumstances?
I can talk about what I ought to do, but it seems to me I can’t talk about what another agent ought to do outside their system of preferences. If I had their preferences, I ought to do what they ought to do. If they had my preferences, they ought to do what I ought to do. But to consider what they ought to do, with some mixture of preferences, is incoherent.
I can have a preference for what another agent does, of course, but this is different than asserting a morality. For example, if they don’t do what I think is moral, I’m not morally culpable. I don’t have their agency.
As far as I can tell, we don’t disagree on any matter of fact. I agree that we can only optimize our own actions. I agree that other agents won’t necessarily find our moral arguments persuasive. I just don’t agree that the words “moral” and “ought” should be used the way you use them.
To the greater LW community: Is there some way we can come up with standard terminology for this sort of thing? I myself have moved toward using the terminology used by Eliezer, but not everyone has. Are there severe objections to his terminology and if so, are there any other terminologies you think we should adopt as standard?
You’re thinking of the wrong sense of objective. An objective morality, according to this article, is a morality that doesn’t depend on the subject’s mind; it depends on something else. I.e., if we were trying to determine what should_byrnema is, we wouldn’t look at your preferences; instead, we would look somewhere else. So for example:
A nonrelativist objectivist would say that we would look at the one true universally compelling morality that’s written into the fabric of reality (or something like that). So should_byrnema is just should, period.
A relativist objectivist might say (to take cultural relativism as just one example) that we would look for should_byrnema in the culture that you are currently embedded in. So should_byrnema is should_culture.
I’m not sure that subjective nonrelativism is a possibility though.
I think “subjective” means based on opinion (a mind’s assessment).
If Megan is moral if she thinks she’s moral, then the morality of Megan is subjective and depends on her mind. If Megan is moral if I think she’s moral, then it’s subjective and depends on my mind.
I think that whether an agent is moral or not is a fact, and doesn’t depend upon the opinion/assessment of any mind. But we would still look at the agent’s preferences to determine the fact. I thought this was already described by the word ‘relative’.
“Subjective” has many meanings. The article uses “subjective” to mean dependent on the mind in any way. Not just a mind’s assessment.
Given this definition of subjective, the article would classify your last paragraph as an example of subjective relativism.
I see. Just to clarify fully: in my last paragraph, morality depends on the mind because a mind is required for preferences and agency? Are there any exceptions to this?
yep
I dunno, my concept of mind is too fuzzy to have an answer for that.
Thanks, I do understand the framework you’re using, and can now say I don’t agree with it.
First, one wouldn’t say that morality is subjective just because the morality of an entity depends upon its preferences and agency. Even an objective morality would usually apply moral judgments only to entities with preferences and agency.
Second, subjective should mean that Megan’s action could be considered moral by Fred but not moral by Tom. In other words, the morality is determined by and depends upon someone’s mind. In the relative objective morality I’ve been speaking of, neither Megan, Fred, nor Tom gets to decide if Megan’s action is moral. The morality of the action is a fact of, and determined by, the system of Megan, her action, and the context of that action. The morality of her action is something that could be computed by something without a mind, and the morality of her action doesn’t depend on the computation actually being done.
I’m not using any framework here, just definitions. The article defined relative and subjective in certain ways in order to classify moral systems, and I’ve just been relating how the article defines these terms. There’s only semantics here, no actual inference.
Using your framing regarding what it is that we are discussing (framings cannot be avoided), perhaps I disagree with your interpretation of the phrase ‘mind dependent’.
The article writes:
In either case, it may be that what determines the difference in the two contexts is something “mind-dependent”—in which case it would be subjectivist relativism—but it need not be. Perhaps what determines the relevant difference is an entirely mind-independent affair, making for an objectivist relativism.
The article does not actually define mind-dependent. I think that by “mind-dependent”, the article means that it is a mind that is doing the calculation and that assigns the morality, whereas if I am understanding your position (for example), you seem to think that “mind-dependent” means that an entity being labeled moral must have a mind. In the first paragraph of my last comment, I argued that this sense of mind-dependent would make “objective morality” more or less moot, because we hardly ever talk about the morality of mindless entities.
Tyrell McAllister writes:
His understanding of subjectivist also seems to interpret ‘mind-dependent’ as requiring a mind to do the measuring.
We seem to be talking past each other, but I’m not entirely sure where the misunderstanding is, so I’ll just lay out my view of what the article says again in different terms.
A morality is subjective iff you have to look at the mind of an agent in order to determine whether they are moral. e.g., morality as preferences. A morality is objective iff you don’t look at the mind of an agent in order to determine whether they are moral. For example, a single morality “written into the fabric of the universe,” or a morality that says what is moral for an agent depends on where in the universe the agent happens to be (note that the former is not relative and the latter is, but I don’t think we’re disagreeing on what that means).
In both cases, the only type of thing being called moral is something with a mind (whatever “mind” means here). The difference is whether or not you have to look inside the mind to determine the morality of the agent.
So I’m not saying that mind-dependent vs. mind-independent is the difference between having a mind and not having a mind; it’s the difference between looking at the mind that the agent is assumed to have and not looking at it.
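One way to picture the distinction being drawn here, as a sketch only; the Mind field and the region table are invented for illustration, not taken from the article:

```python
from dataclasses import dataclass

@dataclass
class Mind:
    approves_of: set  # whatever this particular mind endorses

@dataclass
class Agent:
    mind: Mind
    region: str  # where in the universe (or which culture) the agent sits

# Subjective: the verdict is reached by looking inside the agent's mind.
def moral_subjective(agent: Agent, action: str) -> bool:
    return action in agent.mind.approves_of

# Objective but relative: the verdict still depends on the agent (via its
# region), yet no mind is consulted to reach it.
MORAL_BY_REGION = {
    "region_a": {"share"},
    "region_b": {"share", "duel"},
}

def moral_objective_relative(agent: Agent, action: str) -> bool:
    return action in MORAL_BY_REGION[agent.region]
```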
That is clearer, but it still describes what I thought I understood of your position. It’s rather unconventional, so it took me a while to be certain what you meant.
I think that ‘subjective’ means that a mind is assessing the morality. The key idea is that different minds could assign different moral judgements, so the judgement is mind-dependent.
In contrast, any morality that considers the state of an agent’s mind in the computation of that agent’s morality can be either objective or subjective.
For example, suppose it was written on a tablet, “the action of every agent is moral unless it is done with the purpose of harming another agent”. The tablet-law is still objective, but the computation of the morality of an action depends on the agent’s intention (and mind).
I just experienced a flicker of a different understanding that helps me relate to your concept of subjective. Suppose there were two tablets:
Tablet A: The action of every agent is moral unless it harms another agent.
Tablet B: The action of every agent is moral unless it is done with the purpose of harming another agent.
Tablet A measures morality based on the absolute, objective result of an action, whereas Tablet B considers the intention of an action.
While this is an important distinction between the tablets, we don’t say that Tablet A is an objective morality and Tablet B is a subjective morality. There must be other terms for this distinction. I know that Tablet A is like consequentialism, and Tablet B includes, for example, virtue ethics.
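For what it is worth, the tablet distinction can be restated as two rule functions that differ only in what they inspect; this is just a toy sketch with made-up fields (caused_harm, intended_harm):

```python
from dataclasses import dataclass

@dataclass
class Action:
    caused_harm: bool    # what actually happened (outcome)
    intended_harm: bool  # what the agent was trying to do (intention)

# Tablet A: judged purely by the outcome of the action.
def moral_tablet_a(action: Action) -> bool:
    return not action.caused_harm

# Tablet B: judged by the agent's intention, so the rule reads a mental state,
# even though the rule itself is fixed on the tablet for everyone alike.
def moral_tablet_b(action: Action) -> bool:
    return not action.intended_harm

# An accidental injury: immoral under Tablet A, moral under Tablet B.
accident = Action(caused_harm=True, intended_harm=False)
print(moral_tablet_a(accident), moral_tablet_b(accident))  # False True
```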
It’s rather unconventional, so it took me a while to be certain what you meant.
I was just giving my interpretation of the article’s definitions. Do you think my interpretation is unconventional?
I don’t think I disagree with you about how to parse mind-dependent; I’ve just been sloppy in putting it into a definition. I would call both Tablet A and Tablet B objective/mind-independent.
So how about this for a definition of mind-dependent:
The “source” of what is moral for an agent depends on the mind of the agent.
If I understand you correctly, this is my interpretation as well. But to clarify: there doesn’t even have to be an agent in the judgment itself. Take the proposed judgment: “Black holes are immoral”. This can be either subjective or objective. You are an objectivist if you look to something other than a mind to determine its truth value. If you think the fact about whether or not black holes are immoral can be found by looking at the universe or examining black holes, you’re an objectivist. If you ask “How do I feel about black holes”, “How does my society feel about black holes”, or “How does God feel about black holes”, you are a subjectivist, because to determine whether or not to accede to a judgment you examine a mind or minds.
Edit: I just read byrnema’s comment and now I think I probably don’t agree with you. You could also be an objectivist or subjectivist about a judgement of a purely mental fact.
Objectivist: Jealousy is immoral because it was written onto the side of all quarks.
Subjectivist: Jealousy is immoral because I don’t like jealousy.
I agree with everything in your first paragraph, and I was amazed it wasn’t addressed to me. I can’t believe how complicated this turns out to be due to semantics. We could really use a good systemizer in the whole morality field, to clear up the confusion of these tortuously ambiguous terms. (I should add that I don’t know that there isn’t one already, but just skimming through this thread and its sisters seems to indicate one is needed.)
The Wikipedia entry turns out to be a really, really excellent starting point.
As usual, SEP is more thorough but worse at giving you the at-a-glance summary.
Lol, it might as well have been. I couldn’t figure out which one of you had it wrong so I just replied to the most recent comment.
I’ll try to put together a map or diagram for positions in metaethics.
I’m not sure if we have a bona fide expert on metaethics hereabouts. Meta-anything gets squirrely if you’re not being really careful.
I’m not sure that subjective nonrelativism is a possibility though.
Surely it’s a logical possibility. Stipulate: “What’s right is either X or Y, where we ask each person in the universe to think of a random integer, sum them, and pull off the last bit, 0 meaning X is right and 1 meaning Y is right.”
ETA: CEV, perhaps?
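Just to make the stipulated procedure explicit, a minimal sketch; the random-integer step is simulated with a small population, and nothing here is meant as a serious proposal:

```python
import random

def stipulated_verdict(num_people: int, seed: int = 0) -> str:
    """Each person thinks of a random integer; sum them and take the last bit.

    0 means X is right, 1 means Y is right. The verdict is fixed by facts
    about minds (what each person happened to think of), yet it comes out
    the same for everyone, which is what makes it a candidate for
    subjective nonrelativism.
    """
    rng = random.Random(seed)
    total = sum(rng.randint(0, 10**6) for _ in range(num_people))
    return "X" if total % 2 == 0 else "Y"

# A toy run with a small "universe" of 1,000 people.
print(stipulated_verdict(num_people=1000))
```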
Wouldn’t “Everyone should do what my moral code says they should” be subjective nonrelativism? Surely there are lots of people who believe that.
I don’t think the people who believe that, think that their own mental states are what determine the truth of their moral code.
Is CEV even an ethical theory? I thought it was more of an algorithm for extracting human preferences to put them in an AI.
Surely it’s a de facto ethical theory, since it determines entirely what the FAI should do. But then, the FAI is not supposed to be a person, so that might make a difference for our use of ‘ethical’.
hmm. Then wouldn’t it be premised on subjective relativism? (relative to humans)
Yes, I’d considered that when I wrote it, but it’s an odd use of ‘relative’ when it might be equivalent to ‘the same for everyone’.
not all possible minds, just human minds
EDIT: but if you thought all possible minds had the same preferences, then it would be subjective nonrelative, wouldn’t it?
Maybe, though in that unlikely event I would suspect that there’s some universal law behind that odd fact about preferences, in which case I’d think it would be objective.
Well I’m not sure we need to consider merely logically possible minds, and it’s logically possible that non-human minds are physically impossible.
Only in the sense that it logically possible that travel to Mars is physically impossible. The wording is deceptive.
I’m not sure what sense you’re referring to, or what you’re comparing it to, or how it’s deceptive.
Privileging the hypothesis, really.
I’m afraid that wasn’t enough to clear it up for me. Nor is it clear how privileging the hypothesis is relevant to a discussion of logical possibility. Or are you claiming that was the wrong domain of inquiry?
Saying “X is logically possible” bears the conversational implication that X is worth considering—it raises X to conscious attention. But when we’re talking about physical possibility, “logically possible” is the wrong criterion for raising hypotheses to conscious attention, because epistemological limitations imply that every hypothesis is logically possible. Given that we have good physical reasons to draw the opposite conclusion in this case, it is generally a mistake to emphasize the possibility.
Ah, I see what you’re getting at. But it is not that I was trying to emphasize the possibility that there cannot be non-human minds in order to argue in favor of that hypothesis. Rather, I was pointing out that whether CEV is ‘relative’ or not (for purposes of this discussion) is an empirical question. For reference, I would not guess that non-human minds are physically impossible (I’d assign significantly less than 10% probability to that hypothesis).
well then, I’m just not imaginative enough!
Once you’ve had to argue about ethics with logicians, it becomes natural. “But what if… (completely implausible hypothesis that no one believes)” comes up a lot.