Resonating from some of the OP:
Sometimes people think I have a “utility function” that is small and is basically “inside me,” and that I also have a set of beliefs/predictions/anticipations that is large, richly informed by experience, and basically a pointer to stuff outside of me.
I don’t see a good justification for this asymmetry.
Having lived many years, I have accumulated a good many beliefs/predictions/anticipations about outside events: I believe I’m sitting at a desk, that Biden is president, that 2+3=5, and so on and so on. These beliefs came about via colliding a (perhaps fairly simple, I’m not sure) neural processing pattern with a huge number of neurons and a huge amount of world. (Via repeated conscious effort to make sense of things, partly.)
I also have a good deal of specific preference, stored in my ~perceptions of “good”: this chocolate is “better” than that one; this short story is “excellent” while that one is “meh”; such-and-such a friendship is “deeply enriching” to me; this theorem is “elegant, pivotal, very cool” and that code has good code-smell while this other theorem and that other code are merely meh; etc.
My guess is that my perceptions of which things are “good” encode quite a lot of pattern that really is in the outside world, much like my perceptions of which things are “true/real/good predictions.”
My guess is that it’s confused to say my perceptions of which things are “good” are mostly about my utility function, in much the same way that it’s confused to say that my predictions about the world are mostly about my neural processing pattern (instead of acknowledging that they’re a lot about the world I’ve been encountering, and that e.g. the cause of my belief that I’m currently sitting at a desk is mostly that I’m currently sitting at a desk).