A baby is not discovering who it is as its mind develops. It is becoming who it will be. This process does not stop before death. At no point can one say, “THIS is who I am” and stop there, imagining that all future change is merely discovering what one already was (despite the new thing being, well, new).
IMHO, utility functions only make sense for “small world” problems: local, well-defined, legible situations for which all possible actions and outcomes are known and complete preferences are possible. For “large worlds” the whole thing falls apart, for multiple reasons, all of which have been discussed often on LW (although not necessarily with the conclusion that I draw from them). For example, the problems of defining collective utility, self-referential decision theories, non-ergodic decision spaces, game theory with agents reasoning about each other, the observed failure of almost everyone to predict the explosion of various technologies, and the impossibility of limiting the large world to anything less than the whole of one’s future light-cone.
I do not think that any of these will yield merely to “better rationality”.
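For concreteness, the “small world” picture I have in mind is roughly the textbook Savage-style setup (my own sketch and notation, not anyone’s canonical statement):

$$\mathrm{EU}(a) \;=\; \sum_{s \in S} p(s)\, u\big(o(a,s)\big), \qquad a^{*} \;=\; \operatorname*{arg\,max}_{a \in A} \mathrm{EU}(a),$$

where $A$ is a known, finite set of acts, $S$ a known, finite set of states with probabilities $p$, and $o(a,s)$ is the outcome of act $a$ in state $s$. Every ingredient here (the enumerated acts, the enumerated states, the complete preference ordering encoded in $u$) is exactly what a large world refuses to hand you.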
It’s possible to view utility functions in the same way as probability functions (“probability distributions”), namely as rational constraints on a subjective state of mind at a particular point in time. Utilities can describe desires, just as probabilities can describe beliefs. That doesn’t cover multi-agent rationality or diachronic change, but in that respect it isn’t much different from probability theory. (Richard Jeffrey’s axiomatization of utility theory is framed for exactly this “subjective Bayesian” purpose, but unfortunately it isn’t well known.)
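For reference, the heart of Jeffrey’s system (as I recall it) is that desirability is defined on the same propositions as probability and tied to it by the averaging axiom: for incompatible propositions $A$ and $B$ with $\mathrm{prob}(A \vee B) > 0$,

$$\mathrm{des}(A \vee B) \;=\; \frac{\mathrm{prob}(A)\,\mathrm{des}(A) + \mathrm{prob}(B)\,\mathrm{des}(B)}{\mathrm{prob}(A) + \mathrm{prob}(B)},$$

so the desirability of a proposition is just the probability-weighted average of the ways it could turn out true: a constraint on a single state of mind at a single time, exactly parallel to the probability axioms.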
Yeah, when I started studying neuroscience and the genetics of neurons I was kinda mind-blown by just how much change there is throughout the lifetime. Certain things are fairly static, like the long-range axons in your brain (i.e. those spanning more than a millimeter). Other things, like the phenotype (the set of expressed genes) and the synapses, change from second to second.
Indeed, it caused a bit of a fuss in the neuroscience community when enough evidence accumulated that we finally had to admit that the synapses/dendritic spines in the brain fluctuate too quickly and chaotically to be the storage site of learned information they were long thought to be. Other candidates may fill that role, such as proteins that remain in place in the cell while the dendritic spine grows and collapses, or certain patterns of gene expression (triggered by reinforced synaptic activity during learning) which code for a propensity to form a synapse in a particular location… we just don’t know at this point.