Agreed that ultimately everything is reverse-engineered, because we don’t live in a vacuum. However, I feel like there’s a meaningful distinction between:

1. let me reverse-engineer the principles that best describe our moral intuition, let parsimonious principles make me think twice about the moral contradictions our actual behavior often implies, and perhaps even change my behavior as a result
2. let me concoct a set of rules and exceptions that will justify the particular outcome I want, which is often the one that best suits me
For example, consider the contrast between “we should always strive to treat others fairly” and “we should treat others fairly when they are more powerful than us; however, if they are weaker, let us do to them whatever is in our best interest, whether or not it is fair, while paying lip service to fairness in hopes of cajoling those more powerful than us into treating us fairly”. I find the former a less corrupted piece of moral logic than the latter, even though the latter arguably describes actual behavior fairly well. The former compresses more neatly, which isn’t a coincidence.
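You can even make the compression point literal. Here’s a toy sketch in Python (the rule strings are my paraphrases, and compressed byte length is only a crude proxy for description length, since zlib’s header overhead dominates on short strings):

```python
import zlib

# Two candidate moral "models", paraphrased from the contrast above.
simple = b"always strive to treat others fairly"
bloated = (b"treat others fairly when they are more powerful than us; "
           b"however, if they are weaker, do to them whatever is in our "
           b"best interest, fair or not, while paying lip service to "
           b"fairness in hopes of cajoling the more powerful into "
           b"treating us fairly")

# Compressed size as a rough stand-in for minimum description length.
for name, rule in [("simple", simple), ("bloated", bloated)]:
    print(f"{name}: {len(rule)} bytes raw, "
          f"{len(zlib.compress(rule))} bytes compressed")
```

The principle that takes fewer bits to state is the one without self-serving exceptions bolted on.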
There’s something of a [bias-variance tradeoff](https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff) here. The smaller the moral model, the less expressive it is, so it misses more nuance (higher bias); but it is also less swayed by the particulars of our limited experience, so it will be more helpful on future, out-of-distribution questions (lower variance).
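To make the analogy concrete, here’s a minimal sketch in Python/NumPy (the sine “ground truth”, sample size, noise level, and polynomial degrees are all invented stand-ins, and exact numbers vary with the random seed): a low-degree polynomial misses nuance in the data, while a high-degree one fits the training points closely but falls apart on inputs beyond the range it has seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of an underlying trend: our "limited experienced examples".
x_train = rng.uniform(-1, 1, 20)
y_train = np.sin(3 * x_train) + rng.normal(0, 0.3, 20)

# "Out-of-distribution" questions: inputs slightly beyond the training range.
x_test = np.linspace(-1.2, 1.2, 200)
y_test = np.sin(3 * x_test)

for degree in (1, 3, 9):  # small model ... big model
    coeffs = np.polyfit(x_train, y_train, degree)  # fit on what we saw
    rmse = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    print(f"degree {degree}: out-of-distribution RMSE {rmse:.2f}")
```

Typically the line underfits, the degree-9 fit tracks the training points closely but gives wild answers just past them, and the modest cubic generalizes best.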
> Agreed that ultimately everything is reverse-engineered, because we don’t live in a vacuum.
My point was not that we don’t live in a vacuum, but that there’s no ground truth or “correct” model. We’re *only* extrapolating from a very limited set of experienced examples, not understanding anything fundamental.
> For example, consider the contrast between “we should always strive to treat others fairly” and “we should treat others fairly when they are more powerful than us; however, if they are weaker, let us do to them whatever is in our best interest, whether or not it is fair, while paying lip service to fairness in hopes of cajoling those more powerful than us into treating us fairly”.
When you see the word “should”, you know you’re in preferences and modeling land, right?