OK, but what I want to know is how you would react to some person (whose belief system is internally consistent) who has just, say, committed a gratuitous murder. Are you committed to saying that there are no objective grounds for sanctioning him?
There’s an ambiguity here. A standard can make objective judgments, without the selection of that standard being objective. Like meter measurements.
Such a person would be objectively afoul of a standard against randomly killing people. But let’s say he acted according to a standard which doesn’t care about that; we wouldn’t be able to tell him he did something wrong by that other standard. Nor could we tell him he did something wrong according to the one, correct standard (since there isn’t one).
But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him. In fact, it would be inconsistent for us to give him a pass.
If A, seriously pondering the nature of the compulsion behind mathematical judgement, should ask, “Why ought I to believe that 68 + 57 = 125?”, and B answers, “Because it’s true”, then B is not really saying anything beyond, “Because it does”. B does not answer A’s question.
Unless A was just asking to be walked through the calculation steps, then I agree B is not answering A’s question.
But if you want somehow to reduce this to subjective goals, then it looks to me as though mathematics falls by the wayside; you’ll surely allow that this looks pretty dubious, at least superficially.
I’m not sure I’m following the argument here. I’m saying that all normativity is hypothetical. It sounds like you’re arguing there is a categorical ‘ought’ for believing mathematical truths because it would be very strange to say we only ‘ought’ to believe 2 + 2 = 4 in reference to some goal. So if there are some categorical ‘oughts,’ there might be others.
Is it something like that?
If so, then I would offer the goal of “in order to be logically consistent.” There are some who think moral oughts reduce to logical consistency, so we ought act in a certain way in order to be logically consistent. I don’t have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical consistency is going to rein in people with desires that run counter to it any better than relativism can.
But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him. In fact, it would be inconsistent for us to give him a pass.
I understand your point is that we can tell the killer that he has acted wrongly according to our standard (that one ought not randomly to kill people). But if people in general are bound only by their own standards, why should that matter to him? It seems to me I cannot provide him compelling grounds as to why he ought not to have done what he did, and that to punish him would be arbitrary. Sorry if I’m not getting it.
I’m not sure I’m following the argument here. I’m saying that all normativity is hypothetical. It sounds like you’re arguing there is a categorical ‘ought’ for believing mathematical truths because it would be very strange to say we only ‘ought’ to believe 2 + 2 = 4 in reference to some goal. So if there are some categorical ‘oughts,’ there might be others.
Is it something like that?
This states the thought very clearly. Thanks.
If so, then I would offer the goal of “in order to be logically consistent.”
I acknowledge the business about the nature of the compulsion behind mathematical judgement is pretty opaque. What I had in mind is illustrated by this dialogue. As it shows, the problem gets right back to the compulsion to be logically consistent. It’s possible this doesn’t really engage your thoughts, though.
There are some who think moral oughts reduce to logical consistency, so we ought act in a certain way in order to be logically consistent. I don’t have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical consistency is going to rein in people with desires that run counter to it any better than relativism can.
If the view is correct, then you can at least convince rational people that it is not rational to kill people. Isn’t that an important result?
It seems to me I cannot provide him compelling grounds as to why he ought not to have done what he did, and that to punish him would be arbitrary.
When a dispute is over fundamental values, I don’t think we can give the other side compelling grounds to act according to our own values. Consider Eliezer’s paperclip maximizer. How could we possibly convince such a being that it’s doing something irrational, besides pointing out that its current actions are suboptimal for its goal in the long run?
Thanks for the link to the Carroll story. I plan on taking some time to think it over.
If the view is correct, then you can at least convince rational people that it is not rational to kill people. Isn’t that an important result?
It’s important to us, but — as far as I can tell — only because of our values. I don’t think it’s important ‘to the universe’ for someone to refrain from going on a killing spree.
Another way to put it is that the rationality of killing sprees is dependent on the agent’s values. I haven’t read much of this site, but I’m getting the impression that a major project is to accept this...and figure out which initial values to give AI. Simply ensuring the AI will be rational is not enough to protect our values.
besides pointing out that its current actions are suboptimal for its goal in the long run?
That sounds like a good rational argument to me. Is the paperclip maximiser supposed to have a different rationality, or just different values?
Another way to put it is that the rationality of killing sprees is dependent on the agent’s values. I haven’t read much of this site, but I’m getting the impression that a major project is to accept this...and figure out which initial values to give AI. Simply ensuring the AI will be rational is not enough to protect our values.
Like so much material on this site, that tacitly assumes values cannot be reasoned about.
I cannot provide [a murderer] compelling grounds as to why he ought not to have done what he did… [T]o punish him would be arbitrary.
If you don’t want murderers running around killing people, then it’s consistent with your values to set up a situation in which murderers can expect to be punished, and one way to do that is to actually punish murderers.
Yes, that’s arbitrary, in the same sense that every preference you have is arbitrary. If you are going to act upon your preferences without deceiving yourself, you have to feel comfortable with doing arbitrary things.
I think you missed the point quite badly there. The point is that there is no rationally compelling reason to act on any arbitrary value. You gave the example of punishing murderers, but if every value is equally arbitrary, that is no more justifiable than punishing stamp collectors or the left-handed. Having accepted moral subjectivism, you are faced with a choice between acting irrationally or not acting. OTOH, you haven’t exactly given moral objectivism a run for its money.
I acknowledge the business about the nature of the compulsion behind mathematical judgement is pretty opaque. What I had in mind is illustrated by this dialogue. As it shows, the problem gets right back to the compulsion to be logically consistent. It’s possible this doesn’t really engage your thoughts, though. Some people I know think it’s just foolish.
There are some who think moral oughts reduce to logical consistency, so we ought act in a certain way in order to be logically consistent. I don’t have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical consistency is going to rein in people with desires that run counter to it any better than relativism can.
As is pointed out in the other thread from your post, plausibly our goal in the first instance is to show that it is rational not to kill people.
There’s an ambiguity here. A standard can make objective judgments, without the selection of that standard being objective. Like meter measurements.
I don’t think that works. If you have multiple contradictory judgements being made by multiple standards, and you deem them to be objective, then you end up with multiple contradictory objective truths. But I don’t think you can have multiple contradictory objective truths.
But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him.
You are tacitly assuming that the good guys are in the majority. However, sometimes the minority is in the right (as you and I would judge it), and needs to persuade the majority to change their ways.
I don’t have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical consistency is going to rein in people with desires that run counter to it any better than relativism can.
It’ll work on people who already subscribe to rationality, whereas relativism won’t.
What’s contradictory about the same object being judged differently by different standards?
Here’s a standard: return the width of the object in meters.
Here’s another: return the number of wavelengths of blue light that make up the width of the object.
And another: return the number of electrons in the object.
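The first two standards above can be sketched as functions (a minimal Python sketch; the wavelength figure is an assumed value for “blue light”):

```python
# Each "standard" is just a function from an object to a number.
# The same object gets a different number from each standard, but there is
# no contradiction: each answers a different question about the object.

BLUE_WAVELENGTH_M = 4.7e-7  # assumed value for "blue light" (~470 nm)

def width_in_meters(width_m):
    """Return the width of the object in meters."""
    return width_m

def width_in_blue_wavelengths(width_m):
    """Return the number of blue-light wavelengths making up the width."""
    return width_m / BLUE_WAVELENGTH_M

rod_width_m = 1.0  # one object, judged by two standards
print(width_in_meters(rod_width_m))            # 1.0
print(width_in_blue_wavelengths(rod_width_m))  # roughly 2.1 million
```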
You are tacitly assuming that the good guys are in the majority. However, sometimes the minority is in the right (as you and I would judge it), and needs to persuade the majority to change their ways.
What’s contradictory about the same object being judged differently by different standards?
Nothing. There’s nothing contradictory about multiple subjective truths, or about multiple opinions, or about a single objective truth. But there is a contradiction in multiple objective truths about morality, as I said.
Here’s a standard: return the width of the object in meters. Here’s another: return the number of wavelengths of blue light that make up the width of the object. And another: return the number of electrons in the object.
There isn’t any contradiction in multiple objective truths about different things; but the original hypothesis was multiple objective truths about the same thing, i.e. the morality of an action. If you are going to say that John-morality and Mary-morality are different things, that is effectively conceding that they are subjective.
If you are going to say that John-morality and Mary-morality are different things, that is effectively conceding that they are subjective.
The focus doesn’t have to be on John and Mary; it can be on the morality we’re referencing via John and Mary. By analogy, we could talk about John’s hometown and Mary’s hometown, without being subjectivists about the cities we are referencing.
Hmm. Sounds like it would be helpful to taboo “objective” and “subjective”. Or perhaps this is my fault for not being entirely clear.
A standard can be put into the form of sentences in formal logic, such that any formal reasoner starting from the axioms of logic will agree about the “judgements” of the standard.
I should mention at this point that I use the word “morality” to indicate a particular standard—the morality-standard—that has the properties we normally associate with morality (“approving” of happiness, “disapproving” of murder, etc). This is the standard I would endorse (by, for example, acting to maximise “good” according to it) were I fully rational, reflectively consistent, and non-akrasiac.
So the judgements of other standards are not moral judgements in the sense that they are not statements about the output of this standard. There would indeed be something inconsistent about asserting that other standards made statements about—ie. had the same output as—this standard.
Given that, and assuming your objections about “subjectivity” still exist, what do you mean by “subjective” such that the existence of other standards makes morality “subjective”, and this a problem?
It already seems that you must be resigned to your arguments failing to work on some minds: there is no god that will strike you down if you write a paperclip-maximising AIXI, for example.
A standard can be put into the form of sentences in formal logic, such that any formal reasoner starting from the axioms of logic will agree about the “judgements” of the standard.
Yep. Subjective statements about X can be phrased in objectivese. But that doesn’t make them objective statements about X.
Given that, and assuming your objections about “subjectivity” still exist, what do you mean by “subjective” such that the existence of other standards makes morality “subjective”, and this a problem?
By other standards do you mean other people’s moral standards, or non-moral (eg aesthetic standards)?
It already seems that you must be resigned to your arguments failing to work on some minds:
Of course. But I think moral objectivism is better as an explanation, because it explains moral praise and blame as something other than a mistake; and I think moral objectivism is also better in practice, because having some successful persuasion going on is better than having none.
Yep. Subjective statements about X can be phrased in objectivese. But that doesn’t make them objective statements about X.
I don’t know what you mean, if anything, by “subjective” and “objective” here, and what they are for.
By other standards do you mean other people’s moral standards, or non-moral (eg aesthetic standards)?
Okay… I think I’ll have to be more concrete. I’m going to exploit VNM-utility here, to make the conversation simpler. A standard is a utility function. That is, generally, a function that takes as input the state of the universe and produces as output a number. The only “moral” standard is the morality-standard I described previously. The rest of them are just standards, with no special names right now.
A mind, for example an alien, may be constructed such that it always executes the action that maximises the utility of some other standard. This utility function may be taken to be the “values” of the alien.
Moral praise and blame is not a mistake; whether certain actions result in an increase or decrease in the value of the moral utility function is an analytic fact. It is further an analytic fact that praise and blame, correctly applied, increase the output of the moral utility function, and that if we failed to do that, we would therefore fail to do the most moral thing.
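The picture in the last few comments can be made concrete with a small sketch. The world-states and utility numbers below are invented for illustration; this is only a toy rendering of the standard-as-utility-function idea, not a claim about any real agent:

```python
# A "standard" is a utility function: state of the world -> number.
# An agent "points to" a standard by choosing the outcome that
# its standard ranks highest. All numbers here are invented.

def morality_standard(state):
    # "Approves" of happiness, "disapproves" of murder.
    return {"happiness": 10.0, "murder": -100.0, "paperclips": 0.0}[state]

def paperclip_standard(state):
    # An alien standard that only cares about paperclips.
    return 100.0 if state == "paperclips" else 0.0

def choose(standard, outcomes):
    # A (VNM-style) maximiser simply picks the highest-ranked outcome.
    return max(outcomes, key=standard)

outcomes = ["happiness", "murder", "paperclips"]
print(choose(morality_standard, outcomes))   # happiness
print(choose(paperclip_standard, outcomes))  # paperclips
# Each verdict is an analytic fact about its own standard; the two agents
# diverge in action without disagreeing about any fact.
```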
I don’t know what you mean, if anything, by “subjective” and “objective” here, and what they are for.
By “subjective” I meant that it is indexed to an individual, and properly so. If Mary thinks vanilla is nice, vanilla is nice-for-Mary, and there is no further fact that can undermine the truth of that—whereas if Mary thinks the world is flat, there may be some sense in which it is flat-for-Mary, but that doesn’t count for anything, because the shape of the world is not something about which Mary has the last word.
By other standards do you mean other people’s moral standards, or non-moral (eg aesthetic standards)?
Okay… I think I’ll have to be more concrete. I’m going to exploit VNM-utility here, to make the conversation simpler. A standard is a utility function. That is, generally, a function that takes as input the state of the universe and produces as output a number. The only “moral” standard is the morality-standard I described previously. The rest of them are just standards, with no special names right now.
And there is one such standard in the universe, not one per agent?
By “subjective” I meant that it is indexed to an individual, and properly so. If Mary thinks vanilla is nice, vanilla is nice-for-Mary, and there is no further fact that can undermine the truth of that—whereas if Mary thinks the world is flat, there may be some sense in which it is flat-for-Mary, but that doesn’t count for anything, because the shape of the world is not something about which Mary has the last word.
If Mary thinks the world is flat, she is asserting that a predicate holds of the earth. It turns out it doesn’t, so she is wrong. In the case of thinking vanilla is nice, there is no sensible niceness predicate, so we assume she’s using shorthand for nice_mary, which does exist, so she is correct. She might, however, get confused and think that nice_mary being true meant nice_x holds for all x, and use nice to mean that. If so, she would be wrong.
Okay then. An agent who thinks the morality-standard says something other than it does, is wrong, since statements about the judgements of the morality-standard are tautologically true.
And there is one such standard in the universe, not one per agent?
There is precisely one morality-standard.
Each (VNM-rational or potentially VNM-rational) agent contains a pointer to a standard—namely, the utility function the agent tries to maximise, or would try to maximise if they were rational. Most of these pointers within a light year of here will point to the morality-standard. A few of them will not. Outside of this volume there will be quite a lot of agents pointing to other standards.
If you have multiple contradictory judgements being made by multiple standards, and you deem them to be objective, then you end up with multiple contradictory objective truths. But I don’t think you can have multiple contradictory objective truths.
Ok, instead of meter measurements, let’s look at cubit measurements. Different ancient cultures represented significantly different physical lengths by ‘cubits.’ So a measurement of 10 cubits to a Roman was a different physical distance than 10 cubits to a Babylonian.
A given object could thus be ‘over ten cubits’ and ‘under ten cubits’ at the same time, though in different senses. Likewise, a given action can be ‘right’ and ‘wrong’ at the same time, though in different senses.
The surface judgments contradict, but there need not be any propositional conflict.
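The cubit point can be put numerically (the cubit lengths below are assumed round figures; the actual historical values varied):

```python
# Assumed cubit lengths in meters (illustrative round figures).
ROMAN_CUBIT_M = 0.444
BABYLONIAN_CUBIT_M = 0.530

def over_ten_cubits(length_m, cubit_m):
    """Judge one physical length against a ten-cubit threshold."""
    return length_m > 10 * cubit_m

rod_m = 4.8  # one object, one physical length
print(over_ten_cubits(rod_m, ROMAN_CUBIT_M))       # True:  4.8 m > 4.44 m
print(over_ten_cubits(rod_m, BABYLONIAN_CUBIT_M))  # False: 4.8 m < 5.30 m
# "Over ten cubits" and "not over ten cubits" at once, with no propositional
# conflict, because the two judgments use different units.
```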
You are tacitly assuming that the good guys are in the majority. However, sometimes the minority is in the right (as you and I would judge it), and needs to persuade the majority to change their ways.
Isn’t this done by appealing to the values of the majority?
It’ll work on people who already subscribe to rationality, whereas relativism won’t.
Only if — independent of values — certain values are rational and others are not.
Likewise, a given action can be ‘right’ and ‘wrong’ at the same time, though in different senses.
Are you sure that people mean different things by ‘right’ and ‘wrong’, or are they just using different criteria to judge whether something is right or wrong?
Isn’t this done by appealing to the values of the majority?
It’s done by changing the values of the majority, by showing the majority that they ought (in a rational sense of ought) to think differently. The point being that if correct reasoning eventually leads to uniform results, we call that objective.
Only if — independent of values — certain values are rational and others are not
Does it work or not? Have majorities not been persuaded that it’s wrong, if convenient, to oppress minorities?
Are you sure that people mean different things by ‘right’ and ‘wrong’, or are they just using different criteria to judge whether something is right or wrong?
What could ‘right’ and ‘wrong’ mean, beyond the criteria used to make the judgment?
It’s done by changing the values of the majority, by showing the majority that they ought (in a rational sense of ought) to think differently.
Sure, if you’re talking about appealing to people to change their non-fundamental values to be more in line with their fundamental values. But I’ve still never heard how reason can have anything to say about fundamental values.
Does it work or not? Have majorities not been persuaded that it’s wrong, if convenient, to oppress minorities?
So far as I can tell, only by reasoning from their pre-existing values.
What could ‘right’ and ‘wrong’ mean, beyond the criteria used to make the judgment?
“Should be rewarded” and “should be punished”. If there were evidence of people saying that the good should be punished, that would indicate that some people are disagreeing about the meaning of good/right. Otherwise, disagreements are about criteria for assigning the term.
So far as I can tell, only by reasoning from their pre-existing values.
But not for all of them (since some of them get discarded), and not only from moral values (since people need to value reason to be reasoned with).
If so, then I would offer the goal of “in order to be logically consistent.” There are some who think moral oughts reduce to logical consistency, so we ought act in a certain way in order to be logically consistent. I don’t have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical consistency is going to rein in people with desires that run counter to it any better than relativism can.
You can stop right there. If no theory of morality based on logical consistency is offered, you don’t have to do any more.
I suppose you mean “if no theory of morality based on logical consistency is offered”.
Of course, one could make an attempt to research reason-based metaethics before discarding the whole idea.
Agreed and edited.
I observe that you didn’t offer a pointer to a theory of morality based on logical consistency.
I agree with Eby: you are a troll. I’m done here.
For one thing, I don’t think logical consistency is quite the right criterion for reason-based objective morality. Pointing out that certain ideas are old and well documented is offering a pointer, and is not trolling.
No Universally Compelling Arguments seems relevant here.
You realize that the linked post applies to arguments about mathematics or physics just as much as about morality.
The focus doesn’t have to be on John and Mary; it can be on the morality we’re referencing via John and Mary. By analogy, we could talk about John’s hometown and Mary’s hometown, without being subjectivists about the cities we are referencing.
That isn’t analogous, because towns aren’t epistemic.
Hmm. Sounds like it would be helpful to taboo “objective” and “subjective”. Or perhaps this is my fault for not being entirely clear.
A standard can be put into the form of sentences in formal logic, such that any formal reasoner starting from the axioms of logic will agree about the “judgements” of the standard.
I should mention that this point that I use the word “morality” to indicate a particular standard—the morality-standard—that has the properties we normally associate with morality (“approving” of happiness, “disapproving” of murder, etc). This is the standard I would endorse (by, for example, acting to maximise “good” according to it) were I fully rational and reflectively consistent and non-akrasiac.
So the judgements of other standards are not moral judgements in the sense that they are not statements about the output of this standard. There would indeed be something inconsistent about asserting that other standards made statements about—ie. had the same output as—this standard.
Given that, and assuming your objections about “subjectivity” still exist, what do you mean by “subjective” such that the existence of other standards makes morality “subjective”, and this a problem?
It already seems that you must be resigned to your arguments failing to work on some minds: there is no god that will strike you down if you write a paperclip-maximising AIXI, for example.
Yep. Subjective statements about X can be phrased in objectivese. But that doesn’t make them objective statements about X.
By other standards do you mean other people’s moral standards, or non-moral (eg aesthetic standards)?
Of course. But I think moral objectivism is better as an explanation, because it explains moral praise and blame as something other than a mistake; and I think moral objectivism is also better in practice because having some successful persuasion going on is better than having none.
I don’t know what you mean, if anything, by “subjective” and “objective” here, and what they are for.
Okay… I think I’ll have to be more concrete. I’m going to exploit VNM-utility here, to make the conversation simpler. A standard is a utility function. That is, generally, a function that takes as input the state of the universe and produces as output a number. The only “moral” standard is the morality-standard I described previously. The rest of them are just standards, with no special names right now.
A mind, for example an alien, may be constructed such that it always executes the action that maximises the utility of some other standard. This utility function may be taken to be the “values” of the alien.
Moral praise and blame is not a mistake; whether certain actions result in an increase or decrease in the value of the moral utility function is an analytic fact. It is further an analytic fact that praise and blame, correctly applied, increase the output of the moral utility function, and that if we failed to apply them, we would therefore fail to do the most moral thing.
By “subjective” I meant that it is indexed to an individual, and properly so. If Mary thinks vanilla is nice, vanilla is nice-for-Mary, and there is no further fact that can undermine the truth of that—whereas if Mary thinks the world is flat, there may be some sense in which it is flat-for-Mary, but that doesn’t count for anything, because the shape of the world is not something about which Mary has the last word.
And there is one such standard in the universe, not one per agent?
If Mary thinks the world is flat, she is asserting that a predicate holds of the earth. It turns out it doesn’t, so she is wrong. In the case of thinking vanilla is nice, there is no sensible niceness predicate, so we assume she’s using shorthand for nice_mary, which does exist, so she is correct. She might, however, get confused and think that nice_mary being true meant nice_x holds for all x, and use nice to mean that. If so, she would be wrong.
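The indexed-predicate picture here can be sketched in a few lines of code. This is only an illustrative sketch of the distinction being drawn; the predicate names (`nice_mary`, `nice_john`) and the flavour preferences are invented for the example.

```python
# 'nice' as a family of indexed predicates: nice_mary exists,
# but there is no universal, unindexed 'niceness' predicate.
def nice_mary(flavour):
    # Hypothetical: Mary likes vanilla.
    return flavour == "vanilla"

def nice_john(flavour):
    # Hypothetical: John likes chocolate.
    return flavour == "chocolate"

# Mary's shorthand claim "vanilla is nice" unpacks to nice_mary("vanilla"),
# which is true, so she is correct.
assert nice_mary("vanilla")

# The confused universal reading -- that nice_x("vanilla") holds for
# all x -- is false, so read that way she would be wrong.
assert not all(p("vanilla") for p in (nice_mary, nice_john))
```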
Okay then. An agent who thinks the morality-standard says something other than it does, is wrong, since statements about the judgements of the morality-standard are tautologically true.
There is precisely one morality-standard.
Each (VNM-rational or potentially VNM-rational) agent contains a pointer to a standard—namely, the utility function the agent tries to maximise, or would try to maximise if they were rational. Most of these pointers within a light year of here will point to the morality-standard. A few of them will not. Outside of this volume there will be quite a lot of agents pointing to other standards.
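The standards-as-utility-functions picture above can be put concretely in code. This is a toy sketch under invented assumptions: the universe-states, the weights inside `morality_standard`, and `paperclip_standard` itself are all illustrative, not anything specified in the discussion.

```python
# A "standard" is just a utility function: a map from a
# universe-state to a number.

def morality_standard(state):
    # "Approves" of happiness, "disapproves" of murder
    # (the weight of 100 is an arbitrary illustrative choice).
    return state["happiness"] - 100 * state["murders"]

def paperclip_standard(state):
    # A standard that only cares about paperclips.
    return state["paperclips"]

class Agent:
    """An agent 'contains a pointer to a standard': the utility
    function it tries to maximise (if rational)."""
    def __init__(self, standard):
        self.standard = standard

    def prefers(self, state_a, state_b):
        return self.standard(state_a) > self.standard(state_b)

# Two toy universe-states.
peaceful = {"happiness": 10, "murders": 0, "paperclips": 0}
violent  = {"happiness": 10, "murders": 1, "paperclips": 5}

human = Agent(morality_standard)
clippy = Agent(paperclip_standard)

# Each judgement is an objective fact about its respective standard,
# even though the two standards disagree about the same states.
assert human.prefers(peaceful, violent)
assert not clippy.prefers(peaceful, violent)
```

The point the sketch makes: disagreement between the agents is not a disagreement about any one standard’s output; each output is a tautological fact about that standard.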
Ok, instead of meter measurements, let’s look at cubit measurements. Different ancient cultures represented significantly different physical lengths by ‘cubits.’ So a measurement of 10 cubits to a Roman was a different physical distance than 10 cubits to a Babylonian.
A given object could thus be ‘over ten cubits’ and ‘under ten cubits’ at the same time, though in different senses. Likewise, a given action can be ‘right’ and ‘wrong’ at the same time, though in different senses.
The surface judgments contradict, but there need not be any propositional conflict.
Isn’t this done by appealing to the values of the majority?
Only if — independent of values — certain values are rational and others are not.
Are you sure that people mean different things by ‘right’ and ‘wrong’, or are they just using different criteria to judge whether something is right or wrong?
It’s done by changing the values of the majority... by showing the majority that they ought (in a rational sense of ‘ought’) to think differently. The point being that if correct reasoning eventually leads to uniform results, we call that objective.
Does it work or not? Have majorities not been persuaded that it’s wrong, however convenient, to oppress minorities?
What could ‘right’ and ‘wrong’ mean, beyond the criteria used to make the judgment?
Sure, if you’re talking about appealing to people to change their non-fundamental values to be more in line with their fundamental values. But I’ve still never heard how reason can have anything to say about fundamental values.
So far as I can tell, only by reasoning from their pre-existing values.
“Should be rewarded” and “should be punished”. If there was evidence of people saying that the good should be punished, that would indicate that some people are disagreeing about the meaning of good/right. Otherwise, disagreements are about criteria for assigning the term.
But not from all of them (since some of them get discarded), and not only from moral values (since people need to value reason to be reasoned with).