You mean an ordering? The reals aren’t well-ordered.
Shoot, you’re right. I believe I meant a strict ordering; it’s been a while since I last studied set theory.
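To state the distinction precisely (so I don't garble it again): under the usual order, the reals form a strict total order but not a well-order, because a nonempty subset can fail to have a least element. For example,

\[
S = (0,1) \subseteq \mathbb{R} \text{ is nonempty, yet for any } m \in S \text{ we have } \tfrac{m}{2} \in S \text{ and } \tfrac{m}{2} < m,
\]

so \(S\) has no least element.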
I’m confused about what you mean by an optimizer now, though. It sounds like you mean something along the lines of a utility-based agent, but expected utility in this context is an attribute of a hypothesis relative to a model, not of the hypothesis relative to the world, and we’re just as free to define models as we are to define optimization objectives. Previously I’d been thinking in terms of a more general agent, one which needn’t use a concept of utility and whose performance relative to an objective is only assessed in retrospect.
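Roughly, the picture I have of the utility-based reading is something like (my notation, not anything you’ve committed to)

\[
\mathrm{EU}_M(h) \;=\; \sum_{s} P_M(s \mid h)\, U(s),
\]

where the expectation is taken under a model \(M\); swap in a different model and the same hypothesis \(h\) gets a different expected utility even though nothing about the world has changed, which is why I say it’s model-relative.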
It doesn’t need to use utility explicitly. The utility function is just whatever objective the agent tends to gravitate towards.
I’m not entirely sure what you’re saying in the rest of the comment.
The reason I’m talking about “expected value” is that an optimizer must be able to work in a variety of environments. This is equivalent to talking about a probability distribution over environments.
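Schematically (my own sketch of what I mean, not a formalism from anywhere in particular):

\[
\mathrm{Perf}(\pi) \;=\; \mathbb{E}_{e \sim D}\big[\, U(\pi, e) \,\big] \;=\; \sum_{e} D(e)\, U(\pi, e),
\]

where \(D\) is a probability distribution over environments and \(U(\pi, e)\) is how well the optimizer \(\pi\) does on the objective in environment \(e\). “Works across a variety of environments” and “scores well in expectation under \(D\)” are then two descriptions of the same quantity.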