vNM coherence arguments etc. say something like “you’d better have your preferences satisfy these criteria, because otherwise you might get exploited or miss out on opportunities for profit”.
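To make “exploited” concrete, here’s a minimal toy sketch (my own illustration, not anything from the coherence literature beyond the standard story): an agent with cyclic preferences A ≻ B ≻ C ≻ A pays a small fee for every trade it strictly prefers and ends up holding its original item, strictly poorer.

```python
# Toy money pump: cyclic preferences A > B > C > A violate vNM transitivity,
# so the agent will pay a small fee for each "upgrade" and end up where it
# started, minus three fees.

FEE = 1  # price the agent is willing to pay to trade up to a preferred item

# prefers[x] is the item the agent strictly prefers to x (a preference cycle):
# C > A, B > C, A > B  -- i.e. A > B > C > A.
prefers = {"A": "C", "C": "B", "B": "A"}

def accepts_trade(current, offered):
    """The agent trades (and pays FEE) iff it strictly prefers the offered item."""
    return prefers[current] == offered

item, money = "A", 0
for offered in ["C", "B", "A"]:  # the exploiter offers each "upgrade" in turn
    if accepts_trade(item, offered):
        item, money = offered, money - FEE

print(item, money)  # -> A -3 : same item as at the start, strictly poorer
```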
I have my gripes with parts of this, but to the extent these arguments hold some water (and I do think they hold some water), they assume that there are no other pressures acting on the mind (or the mind-generating process, or something), or [reasons to be shaped like this rather than like that], that act alongside or interact with those vNM-ish pressures.
Various forms of boundedness are the most obvious example, though not a very interesting one. A more interesting example is the need for an updateless component in one’s decision theory.[1] Plausibly there’s also the point that adopting deontology/virtue ethics makes an agent easier to cooperate with.
So I think it’s better to think of vNM-ish pressures as one category of pressures acting on the ~mind than to think of a vNM agent as one of the final-agent-type options. You get the latter from the former if you assume away all the other pressures, but the pressures view is more foundational IMO.
Updatelessness is inconsistent with the assumption of decision-tree separability that underlies the money-pump arguments for vNM, at least the ones Gustafsson uses. Roughly, decision-tree separability says that what it’s rational to choose at a node depends only on the part of the tree reachable from that node; an updateless agent’s choices instead depend on the whole tree, including branches that are no longer reachable, so the assumption fails.