One attractor in the space of moral systems that doesn’t have much to do with what could be engineered into brains is the class of moral systems that are favoured by natural selection.
There’s no such essence in a classical philosophical sense in naturalistic metaethics. Rather, the transcendence comes from morality-as-computation’s ability to be abstracted away from brains, like arithmetic can be abstracted away from calculators. “Concerns generated by our brains”, for example, breaks down when we try to instill morality into an FAI that might be profoundly non-human and non-person.
It sounds a lot as if you are suggesting that there is some essence to morality which transcends “concerns are generated by our brains”.
“still taking morality seriously” modifies “one [who can...]”, not “brains with different concerns”.
I’m not sure why you came to think there was some confusion on this point, so I will not presume to suggest where you went wrong in your reading.