The new question is: what is the upper bound on bits of optimization gained from a bit of observation? What’s the best-case asymptotic scaling? The counterexample suggests it’s roughly exponential, i.e. one bit of observation can double the number of bits of optimization. On the other hand, it’s not just multiplicative, because our xor example at the top of this post showed a jump from 0 bits of optimization to 1 bit from observing 1 bit.
Isn’t it unbounded?
In an absolute sense, yes, but I expect it can be bounded as a function of bits of optimization without observation. For instance, if observing one bit could at most double the number of bits of optimization, then that would bound the bit-gain as a function of bits of optimization without observation, even though it's unbounded in an absolute sense.
Unless you’re seeing some stronger argument which I have not yet seen?
The scaling would also be unbounded; at least, that would be my default assumption without a solid proof otherwise.
In other words, I don't see any reason to assume there must be any hard cap, whether at 2x or 10x or 100x, etc.
Here are two intuitive arguments:
If we can’t observe O, we could always just guess a particular value of O and then do whatever’s optimal for that value. Then with probability P[O], we’ll be right, and our performance is lower bounded by P[O]*(whatever optimization pressure we’re able to apply if we guess correctly).
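A minimal numerical sketch of this guessing argument (the binary observation, payoffs, and probability here are made up purely for illustration):

```python
# Hypothetical toy setup: O is a hidden bit, and for each value of O
# there is an action achieving utility 1 if it matches O, else 0.
p_o = 0.7  # P[O = 1], assumed known to the agent

# Strategy without observation: guess the more likely value of O and
# play the action optimal for that guess.
guess = 1 if p_o >= 0.5 else 0
expected_utility_guess = p_o if guess == 1 else (1 - p_o)

# Strategy with observation: always play the matching action.
expected_utility_observe = 1.0

# The no-observation performance is lower-bounded by
# P[guess correct] * (utility achievable when the guess is correct).
assert expected_utility_guess >= max(p_o, 1 - p_o) * 1.0 - 1e-9
print(expected_utility_guess, expected_utility_observe)  # 0.7 1.0
```

So even without the observation, the agent keeps a P[O] fraction of the optimized performance, which is what bounds the gap.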
The log-number of different policies bounds the log-number of different outcome-distributions we can achieve. And observing one additional bit doubles the log-number of different policies.
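The policy-counting step can be checked directly: with n observed bits and a binary action, there are 2^(2^n) deterministic policies, so each extra observed bit doubles the log-count. This is just the combinatorics, nothing model-specific:

```python
import math

def log2_num_policies(n_obs_bits: int, n_actions: int = 2) -> float:
    # A deterministic policy maps each of the 2**n_obs_bits possible
    # observation strings to one of n_actions actions, so there are
    # n_actions ** (2 ** n_obs_bits) policies in total.
    return (2 ** n_obs_bits) * math.log2(n_actions)

for n in range(4):
    print(n, log2_num_policies(n))
# 0 1.0
# 1 2.0
# 2 4.0
# 3 8.0

# One more observed bit doubles the log-number of policies:
assert log2_num_policies(3) == 2 * log2_num_policies(2)
```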
It’s possible to construct a counterexample where there’s a step from guessing at random to perfect knowledge after an arbitrary number of observed bits; n-1 bits of evidence are worthless alone and the nth bit lets you perfectly predict the next bit and all future bits.
Consider, for example, shifting bits one at a time into the input of a known hash function that has been initialized with an unknown value (of known width), where you are asked to guess a specified bit of the output. In the idealized case, you know nothing about the output of the function until you learn the final bit of the input (once all unknown bits have shifted out), because they are perfectly mixed; after that, you will guess every future bit correctly.
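A sketch of this shift-register setup, using one bit of SHA-256 as a stand-in for the "known hash function" (the window width, secret value, and revealed bits are arbitrary illustrative choices):

```python
import hashlib

WIDTH = 8  # known width of the sliding input window

def output_bit(window: str) -> int:
    # Stand-in for the known hash: one bit of SHA-256 of the window.
    return hashlib.sha256(window.encode()).digest()[0] & 1

secret_init = "10110010"  # unknown initial window contents
revealed = "01101001"     # known bits, shifted in one at a time

window = secret_init
for i, b in enumerate(revealed):
    window = window[1:] + b  # shift one known bit in, one old bit out
    # After i+1 shifts, the window still contains WIDTH-(i+1) unknown
    # bits, so (idealizing the hash as perfectly mixing) the guesser
    # can't predict output_bit(window) until i+1 == WIDTH.

# After WIDTH shifts, the window is exactly the revealed bits: the
# guesser now knows the full input and predicts every output exactly.
assert window == revealed
print(output_bit(window))  # computable from the revealed bits alone
```

The first WIDTH-1 revealed bits buy no predictive power at all, and the final bit buys all of it, which is the "worthless until the last bit" step in the counterexample.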
Seems like the pathological cases can be arbitrarily messy.
I don't think that matters, because knowing all but the last bit, I can simply take two actions: one assuming the last bit is true, and one assuming it's false.
I'm not sure I'm following the setup and notation closely enough to argue that one way or the other, as far as the order in which the agent receives evidence and has to commit to actions. Above, I was considering the simplest case: 1 bit of evidence in, 1 bit of action out, repeat.
I'm pretty sure that could be extended to get the effect of one small key/update that unlocks the whole puzzle and makes the model click all at once. As you say, though, I'm not sure that gets to the heart of the matter regarding the bound. It may show that no such bound exists on the margin (the last piece can be much more valuable on the margin than all the prior pieces of evidence), but not necessarily in a way that violates the proposed bound overall. Maybe we have to see that last piece as unlocking some bounded amount of value from the prior observations.
I think it depends on the size of the world model. Imagine an agent that is uncertain between two world models: it can construct both models in parallel but doesn't know which one is true, so every observation it makes has two interpretations. A single observation that conclusively determines which world model was correct could, I think, produce an arbitrarily large (though resource-bounded) update.