I’ve been told that people use the word “morals” to mean different things. Please answer this poll or add comments to help me understand better.
When you see the word “morals” used without further clarification, do you take it to mean something different from “values” or “terminal goals”?
[pollid:1165]
When you see the word “morals” used without further clarification, do you take it to mean something different from “values” or “terminal goals”?
Depends on context.
When I use it, it means something kind of like “what we want to happen.” More precisely, I treat moral principles as sort keys for determining the preference order of possible worlds. When I say that X is morally superior to Y, I mean that I prefer worlds with more X in them (all else being equal) to worlds with more Y in them.
I know other people who, when they use it, mean something kind of like that, if not quite so crisply, and I understand them that way.
I know people who, when they use it, mean something more like “complying with the rules tagged ‘moral’ in the social structure I’m embedded in.” I know people who, when they use it, mean something more like “complying with the rules implicit in the nonsocial structure of the world.” In both cases, I try to understand by it what I expect them to mean.
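To make the “sort keys” framing above concrete, here is a minimal Python sketch. The world descriptions and the particular key (more flourishing first, then less suffering as a tie-breaker) are invented for illustration, not part of anyone’s actual position.

```python
# A minimal sketch of "moral principles as sort keys" over possible worlds.
# The World fields and the key function are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class World:
    name: str
    flourishing: int   # amount of some morally preferred quantity X
    suffering: int     # amount of some morally dispreferred quantity Y

def moral_sort_key(world: World) -> tuple:
    # More flourishing sorts earlier (hence the negation); ties are broken
    # by less suffering. "X is morally superior to Y" = X sorts before Y.
    return (-world.flourishing, world.suffering)

worlds = [
    World("A", flourishing=10, suffering=3),
    World("B", flourishing=10, suffering=1),
    World("C", flourishing=7, suffering=0),
]

for w in sorted(worlds, key=moral_sort_key):
    print(w.name, w.flourishing, w.suffering)
# Prints B, A, C: more flourishing first, then less suffering.
```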
Survey assumed a consequentialist utilitarian moral framework. My moral philosophy is neither, so there was no adequate answer.
“Morals” and “goals” are very different things. I might make it a goal to (say) steal an apple from a shop; this would be an example of an immoral goal. Or I might make a goal to (say) give some money to charity; this would be a moral goal. Or I might make a goal to buy a book; this would (usually) be a goal with little if any moral weight one way or another.
Morality cannot be the same as terminal goals, because a terminal goal can also be immoral, and someone can pursue a terminal goal while knowing it’s immoral.
AI morals are not a category error; if an AI deliberately kills someone, then that carries the same moral weight as if a person deliberately kills someone.
I see morality as fundamentally a way of dealing with conflicts between values/goals, so I can’t answer questions posed in terms of “our values”, because I don’t know whether that means a set of identical values, a set of non-identical but non-conflicting values, or a set of conflicting values. One implication of that view is that some values/goals are automatically morally irrelevant, since they can be satisfied without potential conflict. Another is that my view approximates to “morality is society’s rules”, but without the dismissive implication: if a society has gone through a process of formulating rules that are effective at reducing conflict, then there is a non-vacuous sense in which that society’s morality is its rules. Also, AI and alien morality are perfectly feasible, and possibly even necessary.
Some people think that any value, if it is the only value, naturally tries to consume all available resources. Even if you explicitly make a satisficing, non-maximizing value (e.g. “make 1000 paperclips”, not just “make paperclips”), a rational agent pursuing that value may consume infinite resources making more paperclips just in case it’s somehow wrong about already having made 1000 of them, or in case some of the ones it has made are destroyed.
On this view, all values need to be able to trade off one another (which implies a common quantitative utility measurement). Even if it seems obvious that the chance you’re wrong about having made 1000 paperclips is very small, and you shouldn’t invest more resources in that instead of working on your next value, this needs to be explicit and quantified.
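As a toy illustration of what “explicit and quantified” could look like, here is a short Python sketch. All the numbers are invented; the only point is that once both options sit on a common utility scale, the residual doubt about the paperclip count can be priced against working on the next value.

```python
# Toy expected-utility comparison for a satisficer that has (probably)
# already made its 1000 paperclips. All quantities are made-up assumptions.
p_wrong = 1e-6                    # chance the count of 1000 is somehow wrong
value_of_paperclip_goal = 1000.0  # utility of the satisficed goal actually holding
cost_of_recheck = 0.01            # utility cost of spending resources re-verifying
gain_from_next_value = 5.0        # utility of working on the next value instead

# One more verification pass only helps in the unlikely world where the
# goal has silently failed, so its expected value is tiny.
eu_recheck = p_wrong * value_of_paperclip_goal - cost_of_recheck
eu_move_on = gain_from_next_value

print(f"recheck: {eu_recheck:+.4f}, move on: {eu_move_on:+.4f}")
# recheck: -0.0090, move on: +5.0000 -> the agent moves on.
# With no common scale (or with p_wrong left unpriced), nothing ever
# outweighs "make extra sure", and the satisficer keeps burning resources.
```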
In this case, since all values inherently conflict with one another, all decisions (between actions that would serve different values) are moral decisions in your terms. I think this is a good intuition pump for why some people think all actions and all decisions are necessarily moral.
Ingenious. However, I can easily get round it by adding the rider that morality is concerned with conflicts between individuals. As stated, that is glib, but it can be motivated. Conflicts between individuals (in the absence of rules about how to distribute resources) are destructive, leading to waste of resources. (Yes, I can predict the importance of various kinds of “fairness” to morality.) Conflicts within individuals are much less so. Conflicts aren’t a problem because they are conflicts; they are a problem because of their possible consequences.
I’m not sure what you mean by conflict between individuals.
If you mean actual conflict like arguing or fighting, then choosing between donating to save five hungry people in Africa vs. two hungry people in South America isn’t a moral choice if nobody can observe your online purchases (let alone counterfactual ones) and develop a conflict with you. Someone who secretly invents a cure for cancer doesn’t have moral reasons to cure others, because they don’t know he can and so are not in conflict with him.
If you mean conflict between individuals’ own values, where each hungry person wants you to save them, then every single decision is moral because there are always people who’d prefer you give them your money instead of doing anything else with it, and there are probably people who want you dead as a member of a nationality, ethnicity or religion. Apart from the unpleasant implications of this variant of utilitarianism, you didn’t want to label all decisions as moral.
I am not taking charity to be a central example of ethics.
Charity, societal improvement, etc. are not centrally ethical, because the dimension of obligation is missing. It is obligatory to refrain from murder, but supererogatory to give to charity. Charity is not completely divorced from ethics, because gaining better outcomes is the obvious flipside of avoiding worse outcomes, but it does not have every component of that which is centrally ethical.
Not all value is morally relevant. Some preferences can be satisfied without impacting anybody else, preferences for flavours of ice cream being the classic example, and these are morally irrelevant. On the other hand, my preference for loud music is likely to impinge on my neighbour’s preference for a good night’s sleep: those preferences have a potential for conflict.
Charity and altruism are part of ethics, but not central to ethics. A peaceful and prosperous society is in a position to consider how best to allocate its spare resources (and utilitarianism is helpful here, without being a full theory of ethics), but peace and prosperity are themselves the outcome of a functioning ethics, not things that can be taken for granted. Someone who treats charity as the outstanding issue in ethics is, as it were, looking at the visible 10% of the iceberg while ignoring the 90% that supports it.
I mean destructive conflict.
Consider two stone-age tribes. When a hunter of tribe A returns with a deer, everyone falls on it, trying to grab as much as possible, and they end up fighting and killing each other. When the same thing happens in tribe B, they apportion the kill in an orderly fashion according to a predefined rule. All other things being equal, tribe B will do better than tribe A: they are in possession of a useful piece of social technology.
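To make the “predefined rule” concrete, here is a minimal Python sketch; the particular shares (a double portion for the successful hunter, equal shares for everyone else) are an invented example, not anything specified above.

```python
# A minimal sketch of tribe B's allocation rule as a piece of social
# technology: an agreed function replaces a destructive scramble.
# The specific shares below are illustrative assumptions.

def apportion_kill(weight_kg: float, members: list[str], hunter: str) -> dict[str, float]:
    """Split a kill: the hunter gets a double share, everyone else an equal share."""
    shares = {name: 1.0 for name in members}
    shares[hunter] = 2.0  # agreed bonus for the successful hunter
    total = sum(shares.values())
    return {name: weight_kg * s / total for name, s in shares.items()}

# Nothing is wasted and nobody fights: the whole deer is distributed.
print(apportion_kill(60.0, ["Ana", "Bo", "Cy", "Dee"], hunter="Bo"))
# {'Ana': 12.0, 'Bo': 24.0, 'Cy': 12.0, 'Dee': 12.0}
```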
Thank you, your point is well taken.
Were the Babyeaters immoral before meeting humans?
If not, what would you like to call the thing we actually care about?
If I don’t use “moral” as a rubber stamp for all and any human value, you don’t run into CCC’s problem of labeling theft and murder as moral because some people value them. That’s the upside. What’s the downside?
What they did was clearly wrong… but, at the same time, they did not know it, and that has relevance.
Consider: you are given a device with a single button. You push the button and a hamburger appears. This is repeatable; every time you push the button, a hamburger appears. To the best of your knowledge, this is the only effect of pushing the button. Pushing the button therefore does not make you an immoral person; pushing the button several times to produce enough hamburgers to feed the hungry would, in fact, be the action of a moral person.
The above paragraph holds even if the device also causes lightning to strike a different person in China every time you press the button. (Although, in this case, creating the device was presumably an immoral act).
So, back to the Babyeaters: some of their actions were immoral, but they themselves were not immoral, due to their ignorance.
Clearly I should have asked about actions rather than people. But the Babyeaters were not ignorant that they were causing great pain and emotional distress. They may not have known how long it continued, but, IIRC, none of the human characters suggested this information might change their minds, because those aliens had a genetic tendency towards non-human preferences, and the (working) society they built strongly reinforced this.
Hmmm. I had to go back and re-read the story.
...I notice that, while they were not ignorant that they were causing pain and emotional distress, they did honestly believe that they were doing the best thing and, indeed, even made a genuine attempt to persuade humanity, from first principles, that this was the right and good thing to do.
So they were doing, at all times, the action which they believed to be most moral, and were apparently willing to at least hear out contrary arguments. I still maintain, therefore, that their actions were immoral but they themselves were not; they made a genuine attempt to be moral to the best of their ability.