This is sensible enough as a theory of morality, but you still haven’t accounted for ethics, or the practice of engaging in interpersonal arguments about moral values. If Bob!morality is so clearly distinct from Frank!morality, why would Bob and Frank even want to engage in ethical reasoning and debate? Is it just a coincidence that we do, or is there some deeper explanation?
A possible explanation: we need to use ethical debate as a way of compromising and defusing potential conflicts. If Bob and Frank couldn’t debate their values, they would probably have to resort to violence and coercion, which most folks would see as morally bad.
Well, I agree that your second paragraph gives a possible reason, and on its own I think it would be enough to make most actual people do ethics.
And while Bob and Frank have clearly distinct moralities, both were created by highly similar circumstances and processes (i.e. those that produce human brains), so it seems very likely that there’s more than just one or two things on which they would agree.
As for other reasons to do ethics, I think the part of Frank!morality that takes Bob!morality as an input is usually rather important, at least in a context where Frank and Bob are both humans in the same tribe. That means Frank wants to know Bob!morality; otherwise Frank!morality has incomplete information to evaluate things with, which is more likely to lead to worse outcomes by Frank’s own moral preferences than Frank would get if he knew Bob’s true moral preferences.
Frank wants to maximize the true Frank!morality, which has a component for Bob!morality, and in expectation, choosing under incomplete information about Bob!morality can only do as well as, and will typically do worse than, choosing with full knowledge of it.
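To make that expected-value point concrete, here is a toy Monte Carlo sketch in Python. Everything in it is a made-up assumption for illustration: the candidate actions, the 0.6 weight Frank!morality puts on Bob!morality, the noise level, and the scoring functions are all invented, not anyone’s actual values.

```python
# Toy sketch (invented numbers, not anyone's actual morality):
# Frank's true score for an action is his own term plus a weighted term for
# how well the action fits Bob!morality. If Frank only has a noisy estimate
# of Bob's values, he sometimes picks the wrong action, so his *true*
# Frank!morality score is lower on average.
import random

ACTIONS = [0.0, 0.5, 1.0]   # hypothetical "how far to accommodate Bob"
WEIGHT_ON_BOB = 0.6         # assumed weight Frank!morality gives Bob!morality
NOISE = 0.8                 # std. dev. of Frank's error in reading Bob's values

def frank_morality(action, bob_value):
    """Frank's true score: his own preference plus the fit to Bob's true preference."""
    own_term = -abs(action - 0.2)        # Frank mildly prefers ~0.2 on his own
    bob_term = -abs(action - bob_value)  # fit to what Bob actually values
    return own_term + WEIGHT_ON_BOB * bob_term

def best_action(bob_value_estimate):
    """Pick the action that looks best given Frank's current read of Bob."""
    return max(ACTIONS, key=lambda a: frank_morality(a, bob_value_estimate))

informed_total, uninformed_total = 0.0, 0.0
N = 10_000
for _ in range(N):
    bob_true = random.uniform(0.0, 1.0)              # Bob's actual values
    bob_guess = bob_true + random.gauss(0.0, NOISE)  # Frank's noisy read
    informed_total += frank_morality(best_action(bob_true), bob_true)
    uninformed_total += frank_morality(best_action(bob_guess), bob_true)

print("avg true Frank!morality, knowing Bob!morality: ", informed_total / N)
print("avg true Frank!morality, guessing Bob!morality:", uninformed_total / N)
```

The “knowing” average always comes out at or above the “guessing” one, since acting on Bob’s true values can never score worse than acting on a misread of them, and that gap is what I mean by incomplete information lowering expected Frank!morality.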
If we add more players, it eventually gets to a point where you can’t keep track of every X!morality, so you build approximations and aggregations of the common patterns of morality and shared values among the members of the groups that Frank!morality evaluates over. Frank also wants to find the best possible game-theoretic “compromise”, since the more other people get of what their own moralities value, the less likely they are to act against Frank!morality, whether through social commitment, ethical reasoning, game-theoretic reasoning, or any other form of cooperation.
Ethics basically looks to me like a natural Nash equilibrium, and meta-ethics like the best route towards Pareto optima. These are rough pattern-matching guesses, though; what numbers would I even be crunching? I don’t have the actual algorithms of actual humans to work with, of course.
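Still, just to make the shape of that guess concrete, here is a toy Python sketch of a stag-hunt-style “values conflict” between Bob and Frank. The strategy names and payoff numbers are entirely made up for illustration; the point is only to show how Nash equilibria and Pareto optima can come apart.

```python
# Toy stag-hunt-style game (invented payoffs): each player either just
# follows his own rules or invests in shared ethical norms. The code
# enumerates pure-strategy Nash equilibria and Pareto-optimal outcomes.
from itertools import product

STRATS = ["follow_own_rules_only", "follow_shared_norms"]

# payoffs[(bob_strategy, frank_strategy)] = (Bob!morality score, Frank!morality score)
payoffs = {
    ("follow_own_rules_only", "follow_own_rules_only"): (2, 2),  # stable but mediocre
    ("follow_own_rules_only", "follow_shared_norms"):   (2, 0),  # norm-follower exploited
    ("follow_shared_norms",   "follow_own_rules_only"): (0, 2),
    ("follow_shared_norms",   "follow_shared_norms"):   (4, 4),  # stable and better
}

def is_nash(cell):
    """Neither player can do better by unilaterally switching strategies."""
    b, f = cell
    ub, uf = payoffs[cell]
    bob_stays = all(payoffs[(b2, f)][0] <= ub for b2 in STRATS)
    frank_stays = all(payoffs[(b, f2)][1] <= uf for f2 in STRATS)
    return bob_stays and frank_stays

def is_pareto_optimal(cell):
    """No other outcome makes one player better off without hurting the other."""
    ub, uf = payoffs[cell]
    return not any(
        p[0] >= ub and p[1] >= uf and p != (ub, uf)
        for p in payoffs.values()
    )

for cell in product(STRATS, STRATS):
    print(cell, payoffs[cell],
          "Nash" if is_nash(cell) else "",
          "Pareto-optimal" if is_pareto_optimal(cell) else "")
```

Running it shows two Nash equilibria: everyone just following their own rules, and everyone following shared norms. Only the shared-norms equilibrium is Pareto-optimal, and that gap between a stable outcome and the best stable outcome is roughly what I’m gesturing at when I call meta-ethics the route towards Pareto optima.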