Giving an example of a phenomenon that is not an optimization is like giving an example of something without a weight. It’s a sliding scale. Everything optimizes, but some things optimize better than others. A pebble doesn’t optimize much of anything. An animal can optimize its inclusive genetic fitness, but not that well. Humans are better at optimizing, but don’t always optimize the same thing, and often work at cross-purposes against themselves.
Could you explain your analogy? In our universe, some things don’t have mass; a sliding scale can have a 0 point, and it might be that in a certain universe almost everything falls on that 0. If your optimization power is 0 on the scale, in a sense you’re an atrocious optimization process; but I think it’s a bit clearer to say that you aren’t an optimization process at all.
I guess you can feasibly get zero with some things, but this is more like hitting distance zero from a bullseye. If you score 0.0001, you’re an atrocious optimizer. If you’re a little worse, you score −0.0001, which means you’re actually optimizing for the opposite goal, against which your score is 0.0001. If you pick an entity and a goal system at random, you probably won’t get a very high score for optimization, and it will be negative half the time, but it will almost never be zero. In order for an entity to not optimize any goal system, it would have to score a perfect zero for every goal system. It’s not going to happen, unless you count “nothing” as your entity.
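A toy sketch of this scoring idea (the `optimization_score`, `entity`, and `goal` below are my own made-up stand-ins, not anything specified in the thread): score an entity by how much it shifts the expected value of a goal function relative to leaving the environment alone. Flipping the goal flips the sign of the score, and a score of exactly zero essentially never happens for an entity that does anything at all.

```python
import random

def optimization_score(entity, goal, sample_env, trials=20_000):
    """Toy score: how much the entity shifts the expected value of `goal`
    compared to leaving a randomly drawn environment state alone."""
    with_entity = sum(goal(entity(sample_env())) for _ in range(trials)) / trials
    without_entity = sum(goal(sample_env()) for _ in range(trials)) / trials
    return with_entity - without_entity

# A randomly chosen "entity" (it nudges the state) and a randomly chosen goal.
sample_env = lambda: random.gauss(0.0, 1.0)
entity = lambda state: state + random.gauss(0.3, 1.0)
goal = lambda state: 2.0 * state

score = optimization_score(entity, goal, sample_env)
opposite = optimization_score(entity, lambda s: -goal(s), sample_env)
print(score, opposite)  # opposite ≈ -score; neither will be exactly zero
```

(The two estimates come from independent Monte Carlo runs, so the second number only approximately equals the negated first one.)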
Wait a minute. Not everything in our universe is real-valued, much less continuous. Unless you’re saying that an optimization goal must produce a well-ordering of possible environment states (which isn’t true for any definition of optimization I’ve ever heard of in an AI context), it should be fairly easy to come up with an objective that generates a cost function returning zero for many possible hypotheses.
For example, “optimize the number of electoral votes I get in the next US presidential election”.
Unless you’re saying that an optimization goal must produce a well-ordering of possible environment states (which isn’t true for any definition of optimization I’ve ever heard of in an AI context)
You mean an ordering? The reals aren’t well-ordered.
If there’s no ordering, there’s circular preferences.
In any case, that’s not what I was talking about.
For example, “optimize the number of electoral votes I get in the next US presidential election”.
Compare the expected number of electoral votes with and without the optimizer. The difference gives you how powerful the optimizer is, and it will almost never be zero.
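A sketch of that comparison, with an entirely made-up election model (the states, probabilities, and the `expected_votes` helper are illustrative assumptions, not anything from the discussion): even though electoral votes are integer-valued, the difference in expected votes with and without the optimizer is a real number, and it is almost never exactly zero.

```python
import random

# Made-up model: (electoral votes, base win probability) per state.
STATES = {"A": (10, 0.40), "B": (20, 0.50), "C": (5, 0.60)}

def election_outcome(effort_by_state):
    """One simulated election: campaign effort raises a state's win probability."""
    votes = 0
    for state, (ev, base_p) in STATES.items():
        p = min(1.0, base_p + 0.1 * effort_by_state.get(state, 0.0))
        if random.random() < p:
            votes += ev
    return votes

def expected_votes(policy, trials=50_000):
    """Monte Carlo estimate of the expected electoral votes under a policy."""
    return sum(election_outcome(policy()) for _ in range(trials)) / trials

no_optimizer = lambda: {}          # no campaign effort anywhere
optimizer = lambda: {"B": 1.0}     # puts effort into the biggest swing state

power = expected_votes(optimizer) - expected_votes(no_optimizer)
print(power)  # ≈ 2 expected votes in this toy model, almost surely not exactly 0
```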
If you mean the reals can’t be well-ordered at all, that would require the axiom of choice to be false. That isn’t derivable from the other axioms of ZF, and in fact the axiom of choice is often used in mathematics. (This is entirely irrelevant to the [long-dead] discussion, however.)
You mean an ordering? The reals aren’t well-ordered.
Shoot, you’re right. I believe I meant a strict ordering; it’s been a while since I last studied set theory.
I’m confused as to what you mean by an optimizer now, though. It sounds like you mean something along the lines of a utility-based agent, but expected utility in this context is an attribute of a hypothesis relative to a model, not of the hypothesis relative to the world, and we’re just as free to define models as we are to define optimization objectives. Previously I’d been thinking in terms of a more general agent, which needn’t use a concept of utility and whose performance relative to an objective is found in retrospect.
Previously I’d been thinking in terms of a more general agent, which needn’t use a concept of utility and whose performance relative to an objective is found in retrospect.
It doesn’t need to use utility explicitly. Its objective is just whatever it tends to gravitate towards.
I’m not entirely sure what you’re saying in the rest of the comment.
The reason I’m talking about “expected value” is that an optimizer must be able to work in a variety of environments. This is equivalent to talking about a probability distribution of environments.
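As a minimal sketch of that equivalence (the environment, agent, and objective below are stand-ins I’ve invented for illustration): “works in a variety of environments” cashes out as the expected score under a probability distribution over environments, which you can estimate by sampling.

```python
import random

def sample_environment():
    """Draw an environment from some distribution (here: a target location)."""
    return random.uniform(-10.0, 10.0)

def agent(environment):
    """A crude optimizer: moves most of the way toward the target."""
    return 0.8 * environment

def objective(environment, action):
    """Higher is better: negative distance from the target."""
    return -abs(environment - action)

def expected_performance(policy, trials=100_000):
    """Expected value of the objective over the distribution of environments."""
    total = 0.0
    for _ in range(trials):
        env = sample_environment()
        total += objective(env, policy(env))
    return total / trials

print(expected_performance(agent))            # ≈ -1.0
print(expected_performance(lambda env: 0.0))  # do-nothing baseline, ≈ -5.0
```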
You mean an ordering? The reals aren’t well-ordered.
I mean a well-ordering, though I’ll admit that was a bit unclear in context. Possible environment states form a set, not points on the real line.