So here’s an alternative explanation of what proto-preferences and preferences are, which is to say, of the process that produces something we might meaningfully reify using the “preference” construct.
Preferences are a model for answering questions of the form “why do this and not that?”. There’s a lot going on in this model, though, because in order to choose what to do we first have to be able to form a “this” and a “that” to choose between. If we strip away the this and that (the ontological), we are left not with what is (the ontic), but with the liminal ontology naturally implied by sense contact and the production of phenomena and experience prior to understanding it (e.g. the way you perceive color already creates a separation between what is and what you perceive, by encoding interactions with what is in fewer bits than it would take to express an exact simulation of it). This process is mostly beyond conscious control in humans, so we tend to think of it as automatic, outside the locus of control, not part of the self, and thus not part of our felt sense of preference. But it’s important because it’s the first time we “make” a “choice”, and choice is what preference is all about.
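To make the “fewer bits” point concrete, here’s a toy sketch; every number and curve in it is an arbitrary stand-in, not a model of actual retinal coding:

```python
import numpy as np

# Toy illustration of the compression point; the numbers are arbitrary
# stand-ins, not a claim about actual retinal coding.
rng = np.random.default_rng(0)
spectrum = rng.random(301)                 # "what is": light intensity at 301 wavelengths
sensitivities = rng.random((3, 301))       # made-up cone sensitivity curves
cone_responses = sensitivities @ spectrum  # "what you perceive": just 3 numbers

print(spectrum.nbytes * 8)        # 19264 bits to record this spectrum sample
print(cone_responses.nbytes * 8)  # 192 bits for the three cone responses
```

The point is just that three numbers are a lossy summary of a much larger state of affairs, and that lossiness is already a kind of selection.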
So how do these choices get made? There are many principles we might derive to explain why we perceive things one way or another, but the one that seems to me most parsimonious and maximally descriptive is minimization of uncertainty. Really cashing that out at this level probably requires some additional effort to deconstruct what it means in a sensible way, one that doesn’t fall apart the way “minimize description length” seems to, because it ignores how minimizing uncertainty over the long term sometimes requires not minimizing it over the short term (avoiding local minima), along with other caveats that make too simple an explanation incomplete. Although I mostly draw on philosophy I’m not explaining here to reach this point, see Friston’s free energy principle, perceptual control theory, etc. for related notions and support.
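To gesture at that short-term/long-term tension a bit more formally, here is one crude way to write it down, where H is entropy over predicted observations o; the notation is purely illustrative, not a formalism Friston or anyone else signs off on:

```latex
% Greedy, per-step uncertainty minimization picks the action
a_t^{\ast} = \arg\min_{a}\; H\!\left[\,p(o_t \mid a)\,\right]
% while the longer-horizon version picks a whole policy over horizon T
\pi^{\ast} = \arg\min_{\pi}\; \mathbb{E}\left[\sum_{t=0}^{T} H\!\left[\,p(o_t \mid \pi)\,\right]\right]
% and the two can disagree: a step that raises H now (exploring) can lower the sum later.
```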
This gives us a kind of low-level operation that can power preferences, which get built up at the next level of ontological abstraction (what we might call feeling or sensation): the encoding of a judgement about success or failure at minimizing uncertainty, which could be positive (residual uncertainty falls below some threshold), negative (it stays above some threshold), or neutral (within error bounds and unable to rule either way). From here we can build up to more complex sorts of preferences over additional levels of abstraction, but they will all be rooted in judgements about whether or not uncertainty was minimized at the perceptual level, keeping in mind that the brain senses itself through recurrent networks of neurons, allowing it to perceive itself and thus apply this same process to the perceptions we reify as “thoughts”.
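As a toy sketch of that three-way judgement (the thresholds are arbitrary placeholders, not claims about how the brain actually sets them):

```python
def valence(residual_uncertainty: float,
            success_threshold: float = 0.2,
            failure_threshold: float = 0.8) -> str:
    """Toy encoding of a judgement about how well uncertainty got minimized.

    Thresholds and units are arbitrary placeholders, not empirical claims.
    """
    if residual_uncertainty < success_threshold:
        return "positive"   # minimization succeeded: little uncertainty left
    if residual_uncertainty > failure_threshold:
        return "negative"   # minimization failed: lots of uncertainty left
    return "neutral"        # within error bounds; can't rule either way


# A percept that resolved almost all of its uncertainty registers as "positive".
print(valence(residual_uncertainty=0.1))  # -> positive
```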
What does this suggest for this discussion? I think it offers a way to dissolve many of the confusions arising from trying to work with our normally reified notions of “preference” or even the simpler but less cleanly bounded notion of “proto-preference”.
(This was a convenient opportunity to work out some of these ideas in writing since this conversation provided a nice germ to build around. I’ll probably refine and expand on this idea elsewhere later.)