Although the major ethical theories disagree about some very fundamental questions, they seem to broadly agree on a lot of actions.
I think this is mixing up cause and effect.
People instinctively find certain things moral. One of them is saving drowning children.
Ethical theories are our attempts to find order in our moral impulses. Of course they all save the drowning child, because any theory that didn’t wouldn’t describe how humans actually behave in practice, and so wouldn’t be a good ethical theory.
It’s similar to someone being surprised that Newton’s theories predict results so similar to Einstein’s even though Newton’s were wrong. But Newton would never have suggested his theories if they didn’t accurately predict the Einsteinian world we actually live in.
I’m not sure I’m entirely persuaded. Are you saying that the goal of ethics is to accurately predict what people’s moral impulse will be in arbitrary situations?
I think moral impulses have changed over time, and it’s notable that some people (Bentham, for example) managed to think hard about ethics and arrive at conclusions which anticipated later shifts in moral values by a long way.
Like, Newton’s theories give you a good way to predict what you’ll see when you throw a ball in the air, but it feels incorrect to me to say that Newton’s goal was to find order in our sensory experience of ball throwing. Do you think that there are in fact ordered moral laws that we’re subject to, which our impulses respond to, and which we’re trying to home in on?
I’m not saying that’s the explicit goal. I’m saying that in practice, if someone suggests a moral theory which doesn’t reflect how humans actually feel about most actions, nobody is going to accept it.
The underlying human drive behind moral theories is to find order in our moral impulses, even if that’s not any given system’s stated goal.
Newton’s theories give you a good way to predict what you’ll see when you throw a ball in the air, but it feels incorrect to me to say that Newton’s goal was to find order in our sensory experience of ball throwing.
I like this framing! The entire point of having a theory is to predict experimental data, and the only way I can collect data is through my senses.
Do you think that there are in fact ordered moral laws that we’re subject to, which our impulses respond to, and which we’re trying to home in on?
You could construct predictive models of people’s moral impulses. I wouldn’t call these models laws, though.