I’d be extremely surprised if there turned out to be some Platonic ideal of a moral system that we can compare against. But it seems fairly clear to me that the moral systems we adopt influence factors which can be objectively investigated, e.g. happiness in individuals (however defined) or stability in societies, and that moral systems can be productively thought of as commensurable with each other along these axes. Since some aspects of our emotional responses are almost certainly innate, it also seems clear to me that the observable qualities of a moral system depend partly on more or less fixed qualities of its hosts, rather than solely on the internal architecture of the moral system in question.
However, it seems unlikely to me that all of these fixed qualities are human universals, i.e. that there are going to be universally relevant “is” values from which we can derive solutions to arbitrary “ought” questions. Different points within human mind-design-space are likely to respond differently to a given moral system, at least on the object level. Additionally, I think it’s unlikely that the observable output of a moral system depends purely on its hosts’ fixed qualities: identity maintenance and related processes set up feedback loops, and we can also expect nearby active moral systems to influence one another’s success.
I’d expect, but cannot prove, the success of a moral system in guaranteeing the happiness of its adherents or the stability of their societies to be governed more by local conditions and biology (species-wide or of particular humans) and less by game-theoretic considerations. Conversely, I’d expect the success of a moral system in handling other moral systems to have more of a game-theoretic flavor, and higher meta-levels to be more game-theoretic still.
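To make the “game-theoretic flavor” a bit more concrete, here’s a toy sketch, entirely my own illustration rather than anything implied above: model two moral systems as strategies in an iterated prisoner’s dilemma and score how each fares against the other. The strategies and payoff matrix are standard textbook choices, and all the names are hypothetical.

```python
# Toy model: "moral systems" as strategies in an iterated prisoner's dilemma.
# Standard textbook payoffs; all names are my own illustration.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Return total payoffs (a_score, b_score) over the repeated game."""
    history_a, history_b = [], []  # each entry: (own_move, opponent_move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (99, 104): defection edges out head-to-head
print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation pays far more
```

The point of the toy model is only that a strategy’s success against another strategy depends on the structure of their interaction rather than on either one’s merits in isolation, which is the sense in which I’d expect higher meta-levels to be more game-theoretic.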
I have no idea where any of this places me in the taxonomy of moral philosophy.