I 100% agree that there is no ghostly essence of goodness.
I agree that pursuing amoral, or even immoral, values can still lead to moral results. (And also vice-versa.)
I agree that if I somehow knew what was moral and what wasn’t, then I would have a basis for formally distinguishing my moral values from my non-moral values even when my intuitions failed. I could even, in principle, build an automated mechanism for judging things as moral or non-moral. (Similarly, if a Pebblesorter knew that primeness was what it valued and knew how to factor large numbers, it would have a basis for formally distinguishing piles it valued from piles it didn’t value even when its intuitive judgments failed, and it could build an automated mechanism for distinguishing such piles.)
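(To make the Pebblesorter half of that concrete, here is a minimal sketch of the kind of automated mechanism I have in mind, assuming the Pebblesorter already knows that primality is the criterion it values; the function names are mine, invented purely for the illustration.)

```python
# Minimal sketch: an automated pile-judging mechanism for a Pebblesorter that
# already knows "piles I value" = "piles of prime size". Purely illustrative.

def is_prime(n: int) -> bool:
    """Trial division; plenty for pile sizes in this illustration."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def pile_is_valued(pile_size: int) -> bool:
    # No intuition needed: the mechanism just checks the known criterion.
    return is_prime(pile_size)

# pile_is_valued(13) -> True, pile_is_valued(9) -> False
```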
I agree with you that real people who actually exist can’t do this, at least not in detail.
You suggest we can divide morality into subconcepts that comprise it (freedom, happiness, fairness, etc.) and that it excludes (anhedonia, etc.). What I still don’t get is how, on your account, I do that in such a way as to ensure that what I end up with is the objectively correct list of moral values, which on your account exists, rather than some different list of values.
That is, suppose Sam and George both go through this exercise, and one of them ends up with “freedom” on their list but not “cooperation”, and the other ends up with “cooperation” but not “freedom.” On your account it seems clear that at least one of them is wrong, because the correct list of moral values is objective.
So, OK… what would we expect to experience if Sam were right? How does that differ from what we would expect to experience if George were right, or if neither of them were?
Do you want people to be happy, free, be treated fairly, etc? Then you value morality to some extent.
Again: how do we know that? What would I expect to experience differently if, instead, happiness, freedom, fairness, etc. turned out not to be aspects of morality, just like maximizing paperclips does? What should I be looking for, to notice if this is true, or confirm that it isn’t? I would still want people to be happy, free, be treated fairly, etc. in either case, after all. What differences would I experience between the two cases?
If you instead use the word “caring” to mean “have values that assign different levels of desirability to various possible states that the world could be in”
Yes, that’s more or less what I mean by “caring”. More precisely I would say that caring about X consists of desiring states of the world with more X more than states of the world with less X, all else being equal, but that’s close enough to what you said.
If by “valuable” you mean “has more of the things that I care about,” then yes, you could say that. Remember, however, that in that case what is “valuable” is subjective, it changes from person to person depending on their individual utility functions.
Yes, that’s what I mean by “valuable.” And yes, absolutely, what is valuable changes from person to person. If I act to maximize my values and you act to maximize yours we might act in opposition (or we might not, depending, but it’s possible).
And I get that you want to say that if we both gave up maximizing our values and instead agreed to implement moral values, then we would be cooperating instead, and the world would be better (even if it turned out that both of us found it less valuable). What I’m asking you is how (even in principle) we could ever reach that point.
To say that a little differently: you value some things (Vg) and I value some things (Vd). Supposing we are both perfectly rational and honest and etc., we can both know what Vg and Vd are, and what events in the world would maximize each. We can agree to cooperate on maximizing the intersection of (Vg,Vd), and we can work out some pragmatic compromise about the non-overlapping stuff. So far so good; I see how we could in principle reach that point, even if in practice we aren’t rational or self-aware or honest enough to do it.
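(A toy illustration of what I mean by cooperating on the intersection, assuming for the sake of the example that our values can be listed at all; every name and number below is invented.)

```python
# Toy model: two agents' value lists, and the split between what they can
# straightforwardly cooperate on and what needs a pragmatic compromise.
# The value names and weights are made up for this illustration.
Vg = {"happiness": 0.6, "freedom": 0.3, "art": 0.1}
Vd = {"happiness": 0.5, "freedom": 0.2, "fairness": 0.3}

shared = set(Vg) & set(Vd)      # cooperate on maximizing these
disputed = set(Vg) ^ set(Vd)    # negotiate some compromise over these

print("cooperate on:", sorted(shared))      # ['freedom', 'happiness']
print("negotiate over:", sorted(disputed))  # ['art', 'fairness']
```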
But I don’t see how we could ever say “There’s this other list, Vm, of moral values; let’s ignore Vg and Vd altogether and instead implement Vm!” because I don’t see how we could ever know what Vm was, even in principle. If we happened to agree on some list Vm, either by coincidence or due to social conditioning or for other reasons, we could agree to implement Vm… which might or might not make the world better, depending on whether Vm happened to be the objectively correct list of moral values. But I don’t see how we could ever, even in principle, confirm or deny this, or correct it if we somehow came to know we had the objectively wrong list.
And if we can’t know or confirm or deny or correct it, even in principle, then I don’t see what is added by discussing it. It seems to me I can just as usefully say, in this case, “I value happiness, freedom, fairness, etc. I will act to maximize those values, and I endorse acting this way,” and nothing is added by saying “Those values comprise morality” except that I’ve asserted a privileged social status for my values.
So, OK… what would we expect to experience if Sam were right? How does that differ from what we would expect to experience if George were right, or if neither of them were? […] Again: how do we know that? What would I expect to experience differently if, instead, happiness, freedom, fairness, etc. turned out not to be aspects of morality, just like maximizing paperclips does?
Well, I am basically asserting that morality is some sort of objective equation, or “abstract idealized dynamic,” as Eliezer calls it, concerned with people’s wellbeing. And I am further asserting that most human beings care very much about this concept. I think this would make the following predictions:
In a situation where a given group of humans had similar levels of empirical knowledge and a similar sanity waterline, there would be far more moral agreement among them than chance would predict, and far less moral disagreement than is mentally possible.
It is physically possible to persuade people to change their moral values by reasoned argument.
Inhabitants of a society who are unusually rational and intelligent will be the first people in that society to make moral progress, as they will be better at extrapolating answers out of the “equation.”
If one attempted to convert the moral computations people make into an abstract, idealized process and determine its results, many people would find those results at least somewhat persuasive, and might find their ethical views changed by observing them.
All of these predictions appear to be true:
Human societies tend to have a rather high level of moral agreement between their members. Conformity is not necessarily an indication of rightness; it seems fairly obvious that whole societies have held gravely mistaken moral views, such as those that believed slavery was good. However, it is interesting that all the people in those societies were mistaken in exactly the same way. That seems like evidence that they were all reasoning towards similar conclusions, and that the mistakes they made were caused by common environmental factors that impacted all of them. There are other theories that explain this data, of course (peer pressure, for instance), but I still find it striking.
I’ve had moral arguments made by other people change my mind, and changed the minds of other people by moral argument. I’m sure you have also had this experience.
It is well known that intellectuals tend to develop and adopt new moral theories before the general populace does. Common examples of intellectuals whose moral concepts have disseminated into the general populace include John Locke, Jeremy Bentham, and William Lloyd Garrison; many of their principles have since been adopted into the public consciousness.
Ethical theorists who have attempted to derive new ethical principles by working from an abstract, idealized form of ethics have often been very persuasive. To name just one example, Peter Singer ended up turning thousands of people into vegetarians with moral arguments that started on a fairly abstract level.
It seems to me I can just as usefully say, in this case, “I value happiness, freedom, fairness, etc. I will act to maximize those values, and I endorse acting this way,” and nothing is added by saying “Those values comprise morality”
Asserting that those values comprise morality seems to be effective because most people perceive those values as related in some way: together they form the superconcept “morality.” Morality is a useful catchall term for certain types of values, and it would be a shame to lose it.
Still, I suppose that asserting “I value happiness, freedom, fairness, etc” is similar enough to saying “I care about morality” that I really can’t object terribly strongly if that’s what you’d prefer to do.
except that I’ve asserted a privileged social status for my values.
Why does doing that bother you? Presumably, because you care about the moral concept of fairness, and don’t want to claim an unfair level of status for you and your views. But does it really make sense to say “I care about fairness, but I want to be fair to other people who don’t care about it, so I’ll go ahead and let them treat people unfairly, in order to be fair.” That sounds silly, doesn’t it? It has the same problems that come with being tolerant of intolerant people.
I think this would make the following predictions:
All of those predictions seem equally likely to me whether Sam is right or George is, so don’t really engage with my question at all. At this point, after several trips ’round the mulberry bush, I conclude that this is not because I’m being unclear with my question but rather because you’re choosing not to answer it, so I will stop trying to clarify the question further.
If I map your predictions and observations to the closest analogues that make any sense to me at all, I basically agree with them.
I suppose that asserting “I value happiness, freedom, fairness, etc” is similar enough to saying “I care about morality” that I really can’t object terribly strongly if that’s what you’d prefer to do.
It is.
Why does doing that [asserting a privileged social status for my values] bother you?
It doesn’t bother me; it’s a fine thing to do under some circumstances. If we can agree that that’s what we’re doing when we talk about “objective morality,” great. If not (which I find more likely), never mind.
Presumably, because you care about the moral concept of fairness, and don’t want to claim an unfair level of status for you and your views.
As above, I don’t see what the word “moral” is adding to this sentence. But sure, unfairly claiming status bothers me to the extent that I care about fairness. (That said, I don’t think claiming status by describing my values as “moral” is unfair; pretty much everybody has an equal ability to do it, and indeed they do. I just think it confuses any honest attempt at understanding what’s really going on when we decide on what to do.)
But does it really make sense to say “I care about fairness, but I want to be fair to other people who don’t care about it, so I’ll go ahead and let them treat people unfairly, in order to be fair.”
It depends on why and how I value (“care about”) fairness.
If I value it instrumentally (which I do), then it makes perfect sense to say that being fair to people who treat others unfairly is net-valuable, although it might be true or false in any given situation depending on what is achieved by the various kinds of fairness that exist in tension in that situation.
Similarly, if I value it in proportion to how much of it there is (which I do), then it makes sense to say the same thing, although it might be true or false depending on how much fairness is gained or lost by doing so.
That sounds silly, doesn’t it?
(nods) Totally. And the ability to phrase ideas in silly-sounding ways is valuable for rhetorical purposes, although it isn’t worth much as an analytical tool.
All of those predictions seem equally likely to me whether Sam is right or George is, so don’t really engage with my question at all.
I’m really sorry; I was trying to kill two birds with one stone by engaging that question and your later question [“What would I expect to experience differently if, instead, happiness, freedom, fairness, etc. turned out not to be aspects of morality, just like maximizing paperclips does?”] at the same time, and I ended up doing a crappy job of answering both of them. I’ll try to just answer the Sam and George question now.
I’ll start by examining the Pebblesorters P-George and P-Sam. P-George thinks 9 is p-right and 16 is p-wrong. P-Sam thinks 9 is p-wrong and 16 is p-right. They both think they are using the word “p-right” to refer to the same abstract, idealized process. What can they do to see which one is right?
They assume that most other Pebblesorters care about the same abstract process they do, so they can try to persuade them and see how successful they are. Of course, even if all the Pebblesorters agree with one of them, that doesn’t necessarily mean that one is p-correct; those sorters may be making the same mistake as P-George or P-Sam. But I think it’s non-zero Bayesian evidence of the p-rightness of their views.
They can try to control for environmentally caused error by seeing if they can also persuade Pebblesorters who live in different environments and cultures.
They can find the most rational and p-sane Pebblesorting societies and see if they have an easier time persuading them.
They can try to work out what abstract, idealized equation the word “p-right” actually represents and compare it to their views: they can read up on Pebblesorter philosophers’ theories of p-rightness and see how well those correlate with their own positions. Pebblesorting is much simpler than morality, so we know what abstract, idealized dynamic the concept “p-right” represents: primality. So we know that P-Sam and P-George are both partly right and partly wrong; neither 9 nor 16 is prime (see the sketch just below).
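(Since primality is simple enough to check mechanically, here is that check spelled out; the helper function is just something I made up for the illustration, but the verdict doesn’t depend on it.)

```python
# Checking P-Sam's and P-George's favorite pile sizes against the known
# p-right criterion (primality). Illustrative sketch only.

def smallest_factor(n: int) -> int:
    """Return n's smallest factor greater than 1 (n itself if n is prime)."""
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return d
    return n

for pile in (9, 16):
    f = smallest_factor(pile)
    verdict = "prime" if f == pile else f"composite (divisible by {f})"
    print(pile, verdict)

# 9 composite (divisible by 3)
# 16 composite (divisible by 2)
```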
Now let’s translate that into human.
If Sam were right and George wrong, we would expect that:
Sam would have an easier time persuading non-sociopathic humans of the rightness of his views than George, because his views are closer to the results of the equation those people have in their heads.
If he went around to different societies with different moral views and attempted to persuade the people there of his views, he should, on average, also have an easier time of it than George, again because his views are closer to the results of the equation those people have in their heads.
Societies with higher levels of sanity and rationality should be especially easily persuaded, because they are better at determining what the results of that equation would be.
When Sam compared his and George’s views to views generated by various attempts by philosophers to create an abstract, idealized version of the equation (i.e., moral theories), his views should be a better match to many of them, and to the results they generate, than George’s are.
The problem is that the concept of morality is far more complex than the concept of primality, so finding the right abstract idealized equation is harder for humans than it is for Pebblesorters. We haven’t managed to do it yet. But I think that by comparing Sam’s and George’s views to the best approximations we have so far (various forms of consequentialism, in my view), we can get some Bayesian evidence of the rightness of their views.
If George is right, he will achieve these results instead of Sam. If they are both wrong, they will both fail at doing these things.
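(To picture how those “small bits of Bayesian evidence” might add up, here is a rough sketch in odds form; every likelihood ratio below is a placeholder I invented for the illustration, not a measured quantity.)

```python
# Hedged sketch: combining several weak observations into odds on
# "Sam's list is closer to morality than George's". All numbers are
# invented placeholders for illustration only.
prior_odds = 1.0  # start indifferent between Sam and George

likelihood_ratios = {
    "persuades more individual humans": 1.5,
    "persuades people across different cultures": 1.3,
    "persuades saner, more rational societies more easily": 1.4,
    "better matches philosophers' idealized moral theories": 1.6,
}

posterior_odds = prior_odds
for observation, lr in likelihood_ratios.items():
    posterior_odds *= lr  # each observation nudges the odds a little

posterior_prob = posterior_odds / (1.0 + posterior_odds)
print(f"posterior odds ~ {posterior_odds:.2f}, i.e. P(Sam) ~ {posterior_prob:.2f}")
# With these made-up ratios: odds ~ 4.37, P(Sam) ~ 0.81
```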
If I value it instrumentally (which I do), then it makes perfect sense to say that being fair to people who treat others unfairly is net-valuable, although it might be true or false in any given situation depending on what is achieved by the various kinds of fairness that exist in tension in that situation.
Sorry, I was probably being unclear as to what I meant because I was trying to sound clever. When I said it was silly to be fair to unfair people, what I meant was that you should not regard their advice on how best to treat other people with the same consideration you’d give to a fair-minded person’s advice.
For instance, you wouldn’t say “I think it’s wrong to enslave black people, but that guy over there thinks it’s right, so let’s compromise and believe it’s okay to enslave them 50% of the time.” I suppose you might pretend to believe that if the other guy had a gun and you didn’t, but you wouldn’t let his beliefs affect yours.
I did not mean that, for example, if you, two fair-minded people, and one unfair-minded person are lost in the woods and find a pie, that you shouldn’t give the unfair-minded person a quarter of the pie to eat. That is an instance where it does make sense to treat unfair people fairly.
OK. Thanks for engaging with the question; that was very helpful. I now have a much better understanding of what you believe the differences-in-practice between moral and non-moral values are.
Just to echo back what I’m hearing you say: to the extent that some set of values Vm is easier to convince humans to adopt than other sets of values, is easier to convince sane, rational societies to adopt than less sane, less rational ones, and better approximates the moral theories created by philosophers than other sets of values do, to that extent we can be confident that Vm is the set of values that comprise morality.
Did I get that right?
Regarding fairmindedness: I endorse giving someone’s advice consideration to the extent that I’m confident that considering their advice will implement my values. And, sure, it’s unlikely that the advice of an unfairminded person would, if considered, implement the value of fairness.
Yes, all those things provide small bits of Bayesian evidence that Vm is closer to morality than some other set of values.