But from the perspective of a hypothetical superintelligent spider living on Mars, donating to projects that effectively help humans is utterly pointless.
OK. But does lack of universality imply lack of objectivity, or lack of realism?
Minimally, an objective truth is not a subjective truth; that is to say, it is not mind-dependent. Lack of mind-dependence does not imply that objective truth needs to be the same everywhere, which is to say it does not imply universalism. I like to use the analogy of big G and little g in physics. Big G is a universal constant; little g is the local acceleration due to gravity, which varies from planet to planet (and, in a fine-grained way, at different points on the Earth's surface). But little g is perfectly objective, for all its lack of universality.
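To make the analogy concrete, here is a minimal sketch (not part of the original exchange) that computes little g from big G via g = GM/r², using approximate figures for Earth and Mars. The constant G is the same everywhere; the local value g is not, yet both are perfectly objective.

```python
# Sketch of the big-G / little-g point: one universal constant,
# different (but objective) local values depending on where you stand.

G = 6.674e-11  # universal gravitational constant, m^3 kg^-1 s^-2

def little_g(mass_kg: float, radius_m: float) -> float:
    """Local surface gravity: g = G * M / r^2."""
    return G * mass_kg / radius_m ** 2

# Approximate planetary mass and mean radius.
print(f"Earth: g ~ {little_g(5.972e24, 6.371e6):.2f} m/s^2")  # ~9.82
print(f"Mars:  g ~ {little_g(6.417e23, 3.390e6):.2f} m/s^2")  # ~3.73
```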
I understand “moral realism” as a claim that there is a sequence of clever words that would convince the superintelligent spider that reducing human suffering is a good thing.
So it implies lack of realism? Assuming you set the bar for realism rather high. But lack of realism in that sense does not imply subjectivism or error theory.
I suppose “locally objective” would be how I see morality.
Like, there are things you would hypothetically consider morally correct under sufficient reflection, but perhaps you haven't done the reflection, or maybe you aren't even capable of doing it well enough. Still, there is a sense in which you can be objectively wrong about what the morally right choice is. (Sometimes the wrong choice becomes apparent later, when you regret your actions. But that is simply reflection made easier by seeing the actual consequences instead of having to derive them by thinking.)
But ultimately, morality is a consequence of values, and values exist in brains shaped by evolution and personal history. Other species, or non-biological intelligences, could have dramatically different values.
Now we could play a verbal game about whether to define “values/morality” as “whatever a given species desires”, and then conclude that other species would most likely have different morality; or define “values/morality” as “whatever neurotypical humans desire, on sufficient reflection”, and then conclude that other species most likely wouldn’t have any morality at all. But that would be a debate about the map, not the territory.
Values vary plenty between humans, too. Yudkowsky might need “human value” to be a coherent entity for his theories to work, but that isn’t evidence that human value is in fact coherent. And, because values vary, moral systems vary. You don’t have to go to another planet to see multiple tokens of the type “morality”.