This is too long for me to respond to in detail. I am not sure we interpret “moral realism” the same way. To me, it means something like this:
Imagine a universe without any humans (or any other sentient beings). From my perspective, talking about “morality” in such a universe simply does not make sense; the word does not apply to anything that exists there.
(As a hypothetical alternative, if morality were somehow encoded in the laws of physics or something like that, things could be moral or immoral even in a completely dead universe: maybe a penis-shaped meteorite would be immoral, and a set of craters that coincidentally spell the name of God would be moral.)
The definition of “morality” is suspiciously aligned with some things that humans want. Humans avoid pain; causing pain is immoral. Humans cooperate in large groups; cooperation is moral, betrayal is immoral. Etc. This suggests that if another species evolved with different needs, its definition of morality would be somewhat different. Not completely arbitrary, because there are convergent instrumental goals (every evolved species would probably prefer survival over death, etc.). But an asexually reproducing species might have different intuitions about sexuality; a species that can upload their memories might have different intuitions about physical death; a hive mind might have different intuitions about individualism and privacy; etc.
As an even crazier thought experiment, hypothetical beings living in an RPG game where any damage gives you experience points might have intuitions like “pain is good”, and their idea of torture might include locking people in safe rooms with soft walls, where they are unable to hurt themselves and therefore never gain XP and never level up.
So I use “no moral realism” to mean that morality is somewhat species-dependent.
Imagine a universe without any humans (or any other sentient beings). From my perspective, talking about “morality” in such a universe simply does not make sense; the word does not apply to anything that exists there.

Depends on who does the talking. Why would the presence of something in the universe influence the methodology of judging it (“can’t judge it”), rather than the result of a judgment (“it’s worthless”/“it has no moral relevance”)? (Sounds like corrigibility, a morality that is not in closed form and depends on the environment.)
The absence of something can easily preclude the possibility of judging it.
Right. So for that to make sense, the things being judged are not the universe as a whole (or itself), but parts or aspects abstracted from it: objects of a different kind that are relevant only by somehow relating to it, perhaps by being “embedded” in it.
This is harder to set up as a guide to decision making, because the consequences of actions or decisions are not as isolated from the rest of the universe, but I guess scoped consequentialism (Goodhart/boundaries, mild optimization) would want to make some sense of this. Also, updateless decisions isolate abstractions of ignorance about current/future observations.
I agree that if there were no conscious beings there would be no morality, because I think the only good things are pleasurable brain states. I think humans often want things because they’re good. I replied in more detail to the evolutionary debunking argument in the article.