It seems like you’ve set up a dichotomy between there being universally compelling normative statements and normative statements being meaningless, but what about the position that specific subsets of possible statements are compelling to specific people? Would that be realist, anti-realist, or neither?
If you mean “compelling” in the sense of “convincing” or “motivating,” then I actually don’t mean to suggest there are any “universally compelling normative statements.” I think it’s totally possible for there to be something that someone “should” do (e.g. being vegetarian), without this person either believing they should do it or acting on that belief.
This doesn’t seem too problematic to me, though, since most other kinds of statements also fail to be universally convincing. For example, I also think that the statement “the universe is billions of years old” is both true and not-universally-convincing. Some philosophers do still argue, though, that the failure of normative beliefs to consistently motivate people is a serious challenge for normative realism.
I think you’ve misunderstood the question, actually. “Compelling” here is to be read as in “No Universally Compelling Arguments”.

So the question that clone of saturn was asking, it seems to me (he can correct me if I’m misinterpreting) is: suppose I claim that it’s the case that Bob, or all humans, or all Americans living in Florida whose name begins with a ‘B’, or any other proper subset A of “all agents”, should do X. (And suppose that X is a general injunction, in which all terms are properly quantified, etc., so that its limited applicability is not due to any particular features of the situation(s) which agents in subset A find themselves in; in other words, “agents outside subset A should also do X” could be true, but—I claim—it is not.)
Now, is this realism, or anti-realism? I would not assent to the claim that “All agents should do [properly quantified] X”; yet nor would I assent to the claim “There is no fact of the matter about whether agents in subset A should do X”!
If there is anything that anyone should in fact do, then I would say that meets the standards of “realism.” For example, it could in principle turn out to be the case that the only normative fact is that the tallest man in the world should smile more. That would be an unusual normative theory, obviously, but I think it would still count as substantively normative.
I’m unsure whether this is a needlessly technical point, but sets of facts about what specific people should do also imply and are implied by facts about what everyone should do. For example, suppose that it’s true that everyone should do what best fulfills their current desires. This broad normative fact would then imply lots of narrow normative facts about what individual people should do. (E.g. “Jane should buy a dog.” “Bob should buy a cat.” “Ed should rob a bank.”) And we could also work backward from these narrow facts to construct the broad fact.
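One way to make this quantifier point explicit (the notation below is just my own illustration, not something from the original discussion): write $S(a, x)$ for “agent $a$ should do $x$” and $d^*(a)$ for whatever would best fulfill $a$’s current desires. The broad fact then yields each narrow fact by instantiation, given the relevant non-normative facts about desires:

\[
\forall a\; S\big(a,\, d^*(a)\big) \;\Longrightarrow\; S(\text{Jane},\, \text{buy a dog}), \qquad \text{given that } d^*(\text{Jane}) = \text{buy a dog},
\]

and conversely, once a narrow fact of the form $S(a, d^*(a))$ holds for every agent $a$, universal generalization recovers the broad fact.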
I interpret Eliezer’s post, perhaps wrongly, as focused on a mostly distinct issue. It reads to me like he’s primarily suggesting that for any given normative claim—for example, the claim that everyone should do what best fulfills their current desires or the claim that the tallest man should smile more—there is no argument that could convince every possible mind that the claim is true.
So—and I shall take up this theme again later—wherever you are to locate your notions of validity or worth or rationality or justification or even objectivity, it cannot rely on an argument that is universally compelling to all physically possible minds.
I agree with him at least on this point and think that most normative realists would also tend to agree.
Please let me know (either clone of saturn or Said) if it seems like I’m still not quite answering the right question :)
If there is anything that anyone should in fact do, then I would say that meets the standards of “realism.”
Does “anyone” refer to any human, or any possible being?
Because if it refers to humans, we could argue that humans have many things in common. For example, maybe any (non-psychopathic) human should donate at least a little to effective altruism, because effective altruism brings about the changes they would wish to happen.
But from the perspective of a hypothetical superintelligent spider living on Mars, donating to projects that effectively help humans is utterly pointless. (Assuming that spiders, even superintelligent ones, have zero empathy.)
I understand “moral realism” as a claim that there is a sequence of clever words that would convince the superintelligent spider that reducing human suffering is a good thing. Not merely because humans might reciprocate, or because it would mean more food for the spider once the space train to Mars is built, but because that is simply the right thing to do. Such a thing, I believe, does not exist.
If there is anything that anyone should in fact do, then I would say that meets the standards of “realism.”
Does “anyone” refer to any human, or any possible being?
Sorry, I should have been clearer. I mean to say: “If there exists at least one entity, such that the entity should do something, then that meets the standards of ‘realism.’”
I understand “moral realism” as a claim that there is a sequence of clever words that would convince the superintelligent spider that reducing human suffering is a good thing.
I don’t think I’m aware of anyone who identifies as a “moral realist” who believes this. At least, it’s not part of a normal definition of “moral realism.”
The term “moral realism” is used differently by different people, but typically it’s either used roughly synonymously with “normative realism” (as I’ve defined it in this post) or to pick out a slightly more specific position: that normative realism is true and that people should do things besides just try to fulfill their own preferences.

Some people seem to believe that about artificial intelligence. (Which will likely be more different from us than spiders are.)
But from the perspective of a hypothetical superintelligent spider living on Mars, donating to projects that effectively help humans is utterly pointless.
OK. But does lack of universality imply lack of objectivity, or lack of realism?
Minimally, an objective truth is not a subjective truth, that is to say, it is not mind-dependent. Lack of mind dependence does not imply that objective truth needs to be the same everywhere, which is to say it does not imply universalism. I like to use the analogy of big G and little g in physics. Big G is a universal constant; little g is the local acceleration due to gravity, and will vary from planet to planet (and, in a fine-grained way, at different points on the earth’s surface). But little g is perfectly objective, for all its lack of universality.
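To put some concrete numbers on the analogy (these are standard textbook values, not anything from the comment above): little g is derived from big G via g = GM/r², so it is entirely mind-independent yet comes out different on every planet. A minimal sketch:

```python
# Local gravitational acceleration g = G*M / r**2: derived from the
# universal constant G, but taking a different value on each planet.
G = 6.674e-11  # universal gravitational constant, N*m^2/kg^2

bodies = {
    # name: (mass in kg, mean radius in m) -- standard reference values
    "Earth": (5.972e24, 6.371e6),
    "Mars":  (6.417e23, 3.390e6),
}

for name, (mass, radius) in bodies.items():
    g_local = G * mass / radius ** 2
    print(f"{name}: g = {g_local:.2f} m/s^2")

# Earth: g = 9.82 m/s^2
# Mars:  g = 3.73 m/s^2
```

Both values follow from the same universal constant; neither depends on what anyone happens to think about gravity.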
I understand “moral realism” as a claim that there is a sequence of clever words that would convince the superintelligent spider that reducing human suffering is a good thing.
So it implies lack of realism? Assuming you set the bar for realism rather high. But lack of realism in that sense does not imply subjectivism or error theory.
I suppose “locally objective” would be how I see morality.
Like, there are things you would hypothetically consider morally correct under sufficient reflection, but perhaps you didn’t do the reflection, or maybe you aren’t even good enough at doing reflection. But there is a sense in which you can be objectively wrong about what is the morally right choice. (Sometimes the wrong choice becomes apparent later, when you regret your actions. But this is simply reflection being made easier by seeing the actual consequences instead of having to derive them by thinking.)
But ultimately, morality is a consequence of values, and values exist in brains shaped by evolution and personal history. Other species, or non-biological intelligences, could have dramatically different values.
Now we could play a verbal game about whether to define “values/morality” as “whatever a given species desires”, and then conclude that other species would most likely have different morality; or define “values/morality” as “whatever neurotypical humans desire, on sufficient reflection”, and then conclude that other species most likely wouldn’t have any morality. But that would be a debate about the map, not the territory.
Values vary plenty between humans, too. Yudkowsky might need “human value” to be a coherent entity for his theories to work, but that isn’t evidence that human value is in fact coherent. And, because values vary, moral systems vary. You don’t have to go to another planet to see multiple tokens of the type “morality”.
If there is anything that anyone should in fact do, then I would say that meets the standards of “realism.”
You should in fact pay your taxes. Which is to say that if a socially defined obligation is enough, then realism is true. But that might be setting the bar too low.