I don’t know of any encodings or legible descriptions of ethics that AREN’T reverse-engineered. Unless you’re a moral realist, I suspect this has to be the case, because such systems are in the map, not the territory. And they’re not even in the most detailed maps; they’re massively abstracted over other abstractions.
I’m far more suspicious of simple descriptions, especially when the object space has many more dimensions than the description captures. The likelihood that they’ve missed important things about the actual behavior/observations is extremely high.
Agreed that ultimately everything is reverse-engineered, because we don’t live in a vacuum. However, I feel like there’s a meaningful distinction between:
1. Let me reverse-engineer the principles that best describe our moral intuitions, let parsimonious principles make me think twice about the moral contradictions that our actual behavior often implies, and perhaps even allow my behavior to change as a result.
2. Let me concoct a set of rules and exceptions that will justify the particular outcome I want, which is often the one that best suits me.
For example, consider the contrast between “we should always strive to treat others fairly” and “we should treat others fairly when they are more powerful than us; however, if they are weaker, let us do to them whatever is in our best interest, whether or not it is unfair, while at the same time paying lip service to fairness in hopes that we cajole those more powerful than us into treating us fairly”. I find the former a less corrupted piece of moral logic than the latter, even though the latter arguably describes actual behavior fairly well. The former compresses more neatly, which isn’t a coincidence.
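As a loose illustration of “compresses more neatly”: one crude proxy for the simplicity of a principle is its description length. The toy sketch below is my own (the exact rule strings and the use of zlib as a stand-in for a real complexity measure are assumptions, not anything from the discussion); it just measures how many bytes each rule occupies after compression.

```python
import zlib

# Two candidate moral "rules" from the comment above (illustrative only).
simple_rule = "We should always strive to treat others fairly."
convoluted_rule = (
    "We should treat others fairly when they are more powerful than us; "
    "however, if they are weaker, let us do to them whatever is in our best "
    "interest, whether or not it is unfair, while paying lip service to "
    "fairness in hopes that we cajole those more powerful than us into "
    "treating us fairly."
)

def description_length(rule: str) -> int:
    """Compressed byte length as a (very crude) proxy for description length."""
    return len(zlib.compress(rule.encode("utf-8")))

print("simple rule:    ", description_length(simple_rule), "bytes")
print("convoluted rule:", description_length(convoluted_rule), "bytes")
# The simpler principle has a much shorter description; the suggestion in the
# comment is that self-serving exception clauses show up as extra complexity.
```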
There’s something of a [bias-variance tradeoff](https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff) here. The smaller the moral model, the less expressive it can be (so the more nuance it misses), but the more helpful it will be on future, out-of-distribution questions.
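To make the analogy concrete, here is a minimal sketch of the tradeoff itself (it has nothing to do with ethics specifically; the sine-curve data, noise level, and polynomial degrees are all assumptions of mine for illustration): a small model underfits but generalizes more stably to unseen points, while a large one chases the training noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations of a smooth underlying function (the "training data").
x_train = np.sort(rng.uniform(-1, 1, 30))
y_train = np.sin(3 * x_train) + rng.normal(0, 0.2, x_train.size)

# Held-out points from the same process; here "out-of-distribution" is only
# approximated by unseen data, which is the weak point of this toy setup.
x_test = np.sort(rng.uniform(-1, 1, 200))
y_test = np.sin(3 * x_test)

for degree in (1, 3, 15):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit a model of this size
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
# Typically the degree-15 fit has the lowest training error but the highest
# test error: more expressiveness, less reliable extrapolation.
```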
> Agreed that ultimately everything is reverse-engineered, because we don’t live in a vacuum.
My point was not that we don’t live in a vacuum, but that there’s no ground truth or “correct” model. We’re ONLY extrapolating from a very limited set of experienced examples, not understanding anything fundamental.
> For example, consider the contrast between “we should always strive to treat others fairly” and “we should treat others fairly when they are more powerful than us; however, if they are weaker, let us do to them whatever is in our best interest, whether or not it is unfair, while at the same time paying lip service to fairness in hopes that we cajole those more powerful than us into treating us fairly”.
When you see the word “should”, you know you’re in preferences and modeling land, right?