As far as I can tell, human intuition is the territory you would be modelling, here. In particular, when dealing with counterfactuals, since it would be unethical to actually set up trolley problems.
BTW, there is nothing to stop moral philosophy being predictive, etc.
No, we’re trying to capture System 2’s evaluative cognition, not System 1’s fast-and-loose, bias-governed intuitions.
Wrong kind of intuition
If you have an external standard, as you do with probability theory and logic, system 2 can learn utilitarianism, and its performance can be checked against the external standard.
But we don’t have an agreed standard to compare system 1 ethical reasoning against, because we haven’t solved moral philosophy. What we have is system 2 coming up with speculative theories, which have to be checked against intuition, meaning an internal standard.
Again, the whole point of this task/project/thing is to come up with an explicit theory to act as an external standard for ethics. Ethical theories are maps of the evaluative-under-full-information-and-individual+social-rationality territory.
And that is the whole point of moral philosophy… so it’s sounding like a moot distinction.
You don’t like the word intuition, but the fact remains that while you are building your theory, you will have to check it against humans’ ability to give answers without knowing how they arrived at them. Otherwise you end up with a clear, consistent theory that nobody finds persuasive.
Such a territory does not exist, therefore it’s not territory.
You’re going to have to explain how “thoughts and feelings that people will or would have in certain scenarios” fails to be territory.
By not existing.