If there is anything that anyone should in fact do, then I would say that meets the standards of “realism.”
Does “anyone” refer to any human, or any possible being?
If it refers to humans, we could argue that humans have many things in common. For example, maybe any (non-psychopathic) human should donate at least a little to effective altruism, because effective altruism brings about the changes they would wish to see happen.
But from the perspective of a hypothetical superintelligent spider living on Mars, donating to projects that effectively help humans is utterly pointless. (Assuming that spiders, even superintelligent ones, have zero empathy.)
I understand “moral realism” as a claim that there is a sequence of clever words that would convince the superintelligent spider that reducing human suffering is a good thing. Not merely because humans might reciprocate, or because it would mean more food for the spider once the space train to Mars is built, but because that is simply the right thing to do. Such a thing, I believe, does not exist.
If there is anything that anyone should in fact do, then I would say that meets the standards of “realism.”
Does “anyone” refer to any human, or any possible being?
Sorry, I should have been clearer. I mean to say: “If there exists at least one entity, such that the entity should do something, then that meets the standards of ‘realism.’”
I understand “moral realism” as a claim that there is a sequence of clever words that would convince the superintelligent spider that reducing human suffering is a good thing.
I don’t think I’m aware of anyone who identifies as a “moral realist” who believes this. At least, it’s not part of a normal definition of “moral realism.”
The term “moral realism” is used differently by different people, but typically it’s either used roughly synonymously with “normative realism” (as I’ve defined it in this post) or to pick out a slightly more specific position: that normative realism is true and that people should do things besides just try to fulfill their own preferences.
But from the perspective of a hypothetical superintelligent spider living on Mars, donating to projects that effectively help humans is utterly pointless.
OK. But does lack of universality imply lack of objectivity, or lack of realism?
Minimally, an objective truth is not a subjective truth, that is to say, it is not mind-dependent. Lack of mind-dependence does not imply that objective truth needs to be the same everywhere, which is to say it does not imply universalism. I like to use the analogy of big G and little g in physics. Big G is a universal constant; little g is the local acceleration due to gravity, and will vary from planet to planet (and, in a fine-grained way, at different points on the earth's surface). But little g is perfectly objective, for all its lack of universality.
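To put rough numbers on the analogy (the usual textbook figures, quoted from memory): little g follows from big G via

$$g = \frac{GM}{r^2}$$

where M is the planet's mass and r its radius. The same universal G, about 6.674 × 10⁻¹¹ m³ kg⁻¹ s⁻², gives roughly 9.8 m/s² on Earth and roughly 3.7 m/s² on Mars. The local values differ, but each is an objective fact about its planet.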
I understand “moral realism” as a claim that there is a sequence of clever words that would convince the superintelligent spider that reducing human suffering is a good thing.
So it implies lack of realism? Only if you set the bar for realism rather high. But lack of realism in that sense does not imply subjectivism or error theory.
I suppose “locally objective” would be how I see morality.
Like, there are things you would hypothetically consider morally correct under sufficient reflection, but perhaps you didn’t do the reflection, or maybe you aren’t even good enough at doing reflection. But there is a sense in which you can be objectively wrong about what is the morally right choice. (Sometimes the wrong choice becomes apparent later, when you regret your actions. But this is simply reflection being made easier by seeing the actual consequences instead of having to derive them by thinking.)
But ultimately, morality is a consequence of values, and values exist in brains shaped by evolution and personal history. Other species, or non-biological intelligences, could have dramatically different values.
Now we could play a verbal game about whether to define “values/morality” as “whatever a given species desires”, and then conclude that other species would most likely have different morality; or define “values/morality” as “whatever neurotypical humans desire, on sufficient reflection”, and then conclude that other species most likely wouldn’t have any morality. But that would be a debate about the map, not the territory.
Values vary plenty between humans, too. Yudkowsky might need “human value” to be a coherent entity for his theories to work, but that isn’t evidence that human value is in fact coherent. And, because values vary, moral systems vary. You don’t have to go to another planet to see multiple tokens of the type “morality”.
I don’t think I’m aware of anyone who identifies as a “moral realist” who believes this. At least, it’s not part of a normal definition of “moral realism.”
Some people seem to believe that about artificial intelligence. (Which will likely be more different from us than spiders are.)