If there is anything that anyone should in fact do, then I would say that meets the standards of “realism.” For example, it could in principle turn out to be the case that the only normative fact is that the tallest man in the world should smile more. That would be an unusual normative theory, obviously, but I think it would still count as substantively normative.
I’m unsure whether this is a needlessly technical point, but sets of facts about what specific people should do also imply and are implied by facts about what everyone should do. For example, suppose that it’s true that everyone should do what best fulfills their current desires. This broad normative fact would then imply lots of narrow normative facts about what individual people should do. (E.g. “Jane should buy a dog.” “Bob should buy a cat.” “Ed should rob a bank.”) And we could also work backward from these narrow facts to construct the broad fact.
I interpret Eliezer’s post, perhaps wrongly, as focused on a mostly distinct issue. It reads to me like he’s primarily suggesting that for any given normative claim—for example, the claim that everyone should do what best fulfills their current desires or the claim that the tallest man should smile more—there is no argument that could convince any possible mind into believing the claim is true.
So—and I shall take up this theme again later—wherever you are to locate your notions of validity or worth or rationality or justification or even objectivity, it cannot rely on an argument that is universally compelling to all physically possible minds.
I agree with him at least on this point and think that most normative realists would also tend to agree.
Please let me know (either clone of saturn or Said) if it seems like I’m still not quite answering the right question :)
If there is anything that anyone should in fact do, then I would say that meets the standards of “realism.”
Does “anyone” refer to any human, or any possible being?
Because if it refers to humans, we could argue that humans have many things in common. For example, maybe any (non-psychopathic) human should donate at least a little to effective altruism, because effective altruism brings about the changes they would wish to happen.
But from the perspective of a hypothetical superintelligent spider living on Mars, donating to projects that effectively help humans is utterly pointless. (Assuming that spiders, even superintelligent ones, have zero empathy.)
I understand “moral realism” as a claim that there is a sequence of clever words that would convince the superintelligent spider that reducing human suffering is a good thing. Not merely because humans might reciprocate, or because it would mean more food for the spider once the space train to Mars is built, but because that is simply the right thing to do. Such a thing, I believe, does not exist.
If there is anything that anyone should in fact do, then I would say that meets the standards of “realism.”
Does “anyone” refer to any human, or any possible being?
Sorry, I should have been clearer. I mean to say: “If there exists at least one entity, such that the entity should do something, then that meets the standards of ‘realism.’”
I understand “moral realism” as a claim that there is a sequence of clever words that would convince the superintelligent spider that reducing human suffering is a good thing.
I don’t think I’m aware of anyone who identifies as a “moral realist” who believes this. At least, it’s not part of a normal definition of “moral realism.”
The term “moral realism” is used differently by different people, but typically it’s either used roughly synonymously with “normative realism” (as I’ve defined it in this post) or to pick out a slightly more specific position: that normative realism is true and that people should do things besides just try to fulfill their own preferences.
But from the perspective of a hypothetical superintelligent spider living on Mars, donating to projects that effectively help humans is utterly pointless.
OK. But does lack of universality imply lack of objectivity, or lack of realism?
Minimally, an objective truth is not a subjective truth, that is to say, it is not mind-dependent. Lack of mind dependence does not imply that objective truth needs to be the same everywhere, which is to say it does not imply universalism. I like to use the analogy of big G and little g in physics. Big G is a universal constant; little g is the local acceleration due to gravity, and will vary from planet to planet (and, in a fine-grained way, at different points on the earth's surface). But little g is perfectly objective, for all its lack of universality.
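As a rough numerical sketch of the analogy (the planetary masses and radii below are approximate, and the snippet is purely illustrative): the same universal constant G yields a different little g on each planet, yet each local value is an objective fact.

```python
# Purely illustrative: the universal constant G, combined with local facts
# (a planet's mass and radius), gives a different but still objective value
# of little g on each planet. All figures are approximate.
G = 6.674e-11  # big G: gravitational constant, m^3 / (kg * s^2)

planets = {
    # name: (mass in kg, mean radius in m) -- rough textbook values
    "Earth": (5.972e24, 6.371e6),
    "Mars": (6.417e23, 3.390e6),
}

for name, (mass, radius) in planets.items():
    g_local = G * mass / radius**2  # little g: local surface gravity
    print(f"{name}: g is about {g_local:.2f} m/s^2")

# Prints roughly 9.82 m/s^2 for Earth and 3.73 m/s^2 for Mars.
```

The point of the sketch is just that non-universality (g differs from planet to planet) is compatible with objectivity (each local value follows from mind-independent facts).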
I understand “moral realism” as a claim that there is a sequence of clever words that would convince the superintelligent spider that reducing human suffering is a good thing.
So it implies lack of realism? Assuming you set the bar for realism rather high. But lack of realism in that sense does not imply subjectivism or error theory.
I suppose “locally objective” would be how I see morality.
Like, there are things you would hypothetically consider morally correct under sufficient reflection, but perhaps you didn’t do the reflection, or maybe you aren’t even good enough at doing reflection. But there is a sense in which you can be objectively wrong about what is the morally right choice. (Sometimes the wrong choice becomes apparent later, when you regret your actions. But this is simply reflection being made easier by seeing the actual consequences instead of having to derive them by thinking.)
But ultimately, morality is a consequence of values, and values exist in brains shaped by evolution and personal history. Other species, or non-biological intelligences, could have dramatically different values.
Now we could play a verbal game about whether to define “values/morality” as “whatever a given species desires”, and then conclude that other species would most likely have different morality; or define “values/morality” as “whatever neurotypical humans desire, on sufficient reflection”, and then conclude that other species most likely wouldn’t have any morality. But that would be a debate about the map, not the territory.
Values vary plenty between humans, too. Yudkowsky might need “human value” to be a coherent entity for his theories to work, but that isn’t evidence that human value is in fact coherent. And, because values vary, moral systems vary. You don’t have to go to another planet to see multiple tokens of the type “morality”.
If there is anything that anyone should in fact do, then I would say that meets the standards of “realism.”
You should in fact pay your taxes. Which is to say that if a socially defined obligation is enough, then realism is true. But that might be setting the bar too low.
I don’t think I’m aware of anyone who identifies as a “moral realist” who believes this. At least, it’s not part of a normal definition of “moral realism.”
Some people seem to believe that about artificial intelligence. (Which will likely be more different from us than spiders are.)