This is fair. I personally have very low odds on success but it is not a logical impossibility.
I’d say that the probability of success depends on
(1) Conservatism—how much of the prior structure you want to preserve (i.e., what our behavior actually looks like at the moment, how it's driven by particular shards, etc.). The more conservative you are, the harder it is.
(2) Parametrization—how many moving parts (e.g., values in value consequentialism or virtues in virtue ethics) you allow for in your desired model—the more, the easier.
If you want to explain all of human behavior and reduce it to one metric only, the project is doomed.[1]
For some values of (1) and (2) you can find one or more coherent extrapolations of human values/value concepts. The thing is, often there is no single extrapolation that is clearly best even for one particular person, and the more people whose values you want to extrapolate, the harder it gets. People differ in which extrapolation they would prefer (or even in whether they would want to extrapolate away from their status quo common-sense ethics at all) due to different genetics, experiences, cultural influences, pragmatic reasons, etc.
There may also be some misunderstanding if one side assumes that the project is descriptive (adequately describing all of human behavior with a small set of latent value concepts) while the other takes it to be prescriptive (providing a unified, coherent framework that retains some part of our current value system but makes it more principled, more robust to moving out of distribution, etc.).