One more thing: your model assumes that mental models of situations preexist. However, imagine a preference between tea and coffee. Before I am asked, I have no model and no preference. So I will generate some random model, such as a large coffee and a small tea, and then make a choice. Crucially, the mental model I generate depends on the framing of the question.
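To make the point concrete, here is a toy sketch (all names and details are hypothetical, invented purely for illustration) of a preference that is not retrieved from storage but constructed on demand, with the construction biased by how the question is framed:

```python
import random

def generate_mental_model(framing, seed=None):
    """Generate an ad-hoc mental model of the tea-vs-coffee choice.

    The model is not looked up from a preexisting store; it is sampled
    at query time, and the framing biases which details get filled in.
    (Hypothetical toy example, not a claim about actual cognition.)
    """
    rng = random.Random(seed)
    if framing == "comfort":
        # "Which would be cozier right now?" -> tea is imagined favourably
        return {"tea": "large, warm", "coffee": "small, bitter"}
    elif framing == "energy":
        # "Which would wake you up?" -> coffee is imagined favourably
        return {"tea": "weak", "coffee": "large, strong"}
    else:
        # No particular framing: fill in the details at random
        sizes = ["large", "small"]
        return {"tea": rng.choice(sizes), "coffee": rng.choice(sizes)}

def choose(model):
    # The "preference" is simply read off the freshly generated model
    return max(model, key=lambda k: "large" in model[k])

# The same person, asked under different framings, reports different preferences:
print(choose(generate_mental_model("comfort")))  # -> tea
print(choose(generate_mental_model("energy")))   # -> coffee
```

The point of the sketch is that nothing here deserves the name "stable preference": the answer is an artifact of whichever model happened to be generated, which in turn is an artifact of the framing.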
In some sense, we are here passing the buck of complexity from "values" to "mental models", which are assumed to be stable, actually existing entities. However, we still don't know what a separate "mental model" is, where it is located in the brain, or how it is actually encoded in neurons.
The human might have taste preferences that would decide between tea and coffee, general hedonic preferences that might also apply, and meta-preferences about how they should deal with future choices.
Part of the research agenda, "grounding symbols", is about trying to determine where these models are located.