Tim: no, I’d think of it in reverse: a utility function is a very special type of encoding for a set of preferences.
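(To make the “encoding” direction concrete, here’s a toy sketch of what I mean; the names prefers and u are just mine for illustration, not anything Tim proposed. The preferences are the underlying thing, and the utility function is one compact way of encoding them, from which you can read the whole ordering back off.)

    # Toy illustration (my own, hypothetical names): a utility function u
    # encodes a complete, transitive preference ordering; "A is preferred
    # to B" is just u(A) > u(B).
    def prefers(u, a, b):
        """True if outcome a is strictly preferred to outcome b under u."""
        return u(a) > u(b)

    # Example with a made-up utility function over a few outcomes:
    u = {"apple": 2.0, "cookie": 3.0, "nothing": 0.0}.get
    assert prefers(u, "cookie", "apple")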
Again, I’m not denying that I think I have an intuitive sense of what I mean by the term. It’s just that when I try to reduce it from something mental to something non-mental, the best I can come up with is stuff like “that which an optimization process selects for.”
At which point I have to declare everything an optimization process in some sense. (I’m actually semi-sorta tempted to do this: to talk about optimization power as a property of processes in general, rather than distinguishing certain types of processes as optimization processes. This way I think I’d have a reasonably serviceable reduction of the notion of a preference. Except then I run into trouble with intelligent agents that aren’t logically omniscient and, say, can’t yet fully compute their morality (or primality or whatever, as appropriate), and thus in a sense don’t actually fully know their own preferences.)
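(Here’s one toy way I could imagine operationalizing “optimization power as a property of processes in general” — just my own rough sketch, not something anyone here has endorsed: grade a process by how improbably high its actual outcome lands in the relevant preference ordering, so every process gets a score and “optimization process” stops being a binary category.)

    import math

    # Rough sketch (my own, made-up names): bits of optimization exerted by a
    # process, given the outcome space, a ranking over outcomes, and the
    # outcome the process actually hit.
    def optimization_power(outcomes, rank, achieved):
        """-log2 of the fraction of outcomes ranked at least as high as the
        one the process actually produced."""
        at_least_as_good = sum(1 for o in outcomes if rank(o) >= rank(achieved))
        return -math.log2(at_least_as_good / len(outcomes))

    # A process that lands in the top 1/8 of outcomes exerts 3 bits:
    outcomes = list(range(8))
    print(optimization_power(outcomes, rank=lambda o: o, achieved=7))  # 3.0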
Well, hopefully there’s enough here to illustrate my confusion, so that you or someone who’s actually worked out the correct answer can help me out. I’m annoyed that I don’t know this. :)