Alright, I think we have an answer! The conjecture is false.
Counterexample: suppose I have a very-high-capacity information channel (N bit capacity), but it’s guarded by a uniform random n-bit password. O is the password, A is an N-bit message and a guess at the n-bit password. Y is the N-bit message part of A if the password guess matches O; otherwise, Y is 0.
Let’s say the password is 50 bits and the message is 1M bits. If A is independent of the password, then there’s a 2^-50 chance of guessing the password, so the bitrate will be about 2^-50 * 1M ≈ 2^-30, or about one-billionth of a bit in expectation.
If A “knows” the password, then the capacity is about 1M bits. So, the delta from knowing the password is a lot more than 50 bits. It’s a multiplier of 2^50, rather than an addition of 50 bits.
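A quick numerical sketch of the counterexample (sizes scaled down from 50 and 1M bits so the simulation runs fast; all names are my own, not anything canonical):

```python
import random

def channel(message, guess, password):
    # Y is the N-bit message iff the password guess matches O; otherwise Y = 0.
    return message if guess == password else 0

random.seed(0)
n, N = 10, 20       # password and message bits, scaled down from 50 and 1M
trials = 100_000

# Policy that can't observe O: always guess password 0.
hits = 0
for _ in range(trials):
    password = random.getrandbits(n)
    message = random.getrandbits(N)
    hits += channel(message, 0, password) == message

print(f"no-observation success rate ≈ {hits / trials:.5f} (theory: {2**-n:.5f})")
# A policy that observes O sets guess = password, so all N bits
# get through on every trial: the gap is a factor of ~2^n, not n bits.
```

The empirical rate hovers around 2^-n, matching the argument that the unobserving agent’s throughput is suppressed by a multiplicative factor of 2^n.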
This is really cool! It means that bits of observation can give a really ridiculously large boost to a system’s optimization power. Making actions depend on observations is a potentially-very-big game, even with just a few bits of observation.
Credit to Yair Halberstadt in the comments for the attempted counterexamples which provided stepping stones to this one.
This is interesting, but it also somehow feels a bit like a “cheat” compared to the more “real” version of this problem (namely: if I know something about the world and can think intelligently about it, how much leverage can I get out of it?).
A system in which you can pack so much information into an action, and get so much leverage at the cost of a few bits of information, feels like it ought to be artificial. Trivially, this is actually what makes a lock (real or virtual) work: if you have the one simple key/password, you get to do whatever you like with the contents. But the world as a whole doesn’t seem to work as a locked system (if it did, we would have magic: just a tiny, specific formula or gesture, and we get massive results down the line).
I wonder if the key here isn’t the entropy. Knowing O here allows you to significantly reduce the entropy of the world as a whole, which feels akin to being Maxwell’s demon. In the physical world, though, there are bounds on that sort of observation and action precisely because being able to perform them would let you violate the second law of thermodynamics. So I wonder if the conjecture may be true under some additional constraints that capture these common properties of macroscopic closed physical systems (while remaining false in artificial subsystems we can build for the purpose, in which we only care about certain bits and not all the bits defining the underlying physical microstates).
I’m not sure about that; it seems like there’s lots of instances where just a few bits of knowledge gets you lots of optimization power. Knowing Maxwell’s equations lets you do electronics, and knowing which catalyst to use for the Haber process lets you make lots of food and bombs. If I encoded the instructions for making a nanofactory, that would probably be few bits compared to the amount of optimization you could do with that knowledge.
The important thing is that your relevant information isn’t about the state of the world; it’s about the laws. That’s the evolution map f, not the region O (going by the nomenclature I used in my other comment). Your knowledge about O when using the Haber process is actually roughly proportional to the output: you need to know that inside tank X there is such-and-such precursor, and that it’s pure to a certain degree. That’s like knowing that a certain region of the bit string is prepared purely with 1s.

The laws are an interesting thing because they can have regularities (in fact, we know they do), so they can be represented in compressed form, and you can exploit that knowledge. But to actually represent that knowledge in bits of world-knowledge, you’d need to represent the state of all the experiments from which that knowledge was inferred and generalized. Volume-wise, that’s still less than the applications… unless you count each application as a further validation of the model that updates your confidence in it, at which point, by definition, the bits of knowledge backing the model are always more than the bits of order you got out of it.
Ah, interesting. If I were going down that path, I’d probably aim to use a Landauer-style argument. Something like, “here’s a bound on mutual information between the policy and the whole world, including the agent itself”. And then a lock/password could give us a lot more optimization power over one particular part of the world, but not over the world as a whole.
… I’m not sure how to make something like that nontrivial, though. Problem is, the policy itself would then presumably be embedded in the world, so I(π,world) is just H(π).
Here’s my immediate thought on it: you define a single world bit string W, and A, O and Y are just designated subsections of it. You are able to know only the contents of O, and can set the contents of A. (This feels like it’s reducing the entropy of the whole world, btw, so you could also postulate that you can only do so by drawing free energy from some other region, your fuel F: for each bit of A you set deterministically, you need to randomize two bits of F, so that the overall entropy increases.) After this, some kind of map W→f(W) is applied repeatedly, evolving the system until the time comes to check whether the region Y is as close as possible to your goal configuration G. At this point I think the properties of the result will depend on the properties of the map. Is it a “lock” map like your suggested one (compare a region of A with O, and if they’re identical, clone the rest of A into Y, possibly using up F to keep the entropy change positive)? Is it reversible? Is it chaotic?
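One way to make this toy world concrete (region sizes, the fuel bookkeeping, and the particular lock map are all my own illustrative choices, not a canonical construction):

```python
import random

random.seed(1)

# World W partitioned into regions: O (observable), A (settable by the agent),
# F (fuel), Y (outcome). Sizes are arbitrary illustration choices.
O_SIZE, A_SIZE, F_SIZE, Y_SIZE = 4, 8, 16, 4

def make_world():
    return {
        "O": [random.getrandbits(1) for _ in range(O_SIZE)],
        "A": [0] * A_SIZE,
        "F": [random.getrandbits(1) for _ in range(F_SIZE)],
        "Y": [0] * Y_SIZE,
    }

def lock_map(w):
    # A "lock" map: if the first O_SIZE bits of A match O, clone the rest of A
    # into Y, randomizing the fuel region so total entropy doesn't decrease.
    if w["A"][:O_SIZE] == w["O"]:
        w["Y"] = w["A"][O_SIZE:O_SIZE + Y_SIZE]
        w["F"] = [random.getrandbits(1) for _ in range(F_SIZE)]
    return w

goal = [1, 1, 1, 1]

# An agent that observes O writes the key followed by the goal pattern into A.
w = make_world()
w["A"] = w["O"] + goal
w = lock_map(w)
print(w["Y"])  # → [1, 1, 1, 1]
```

An agent that can’t observe O would have to guess the key, succeeding with probability 2^-O_SIZE; whether such lock-like behavior is typical of a randomly chosen map f is exactly the open question.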
Yeah, not sure, I need to think about it. Reversibility (even treating these as qubits rather than simple bits) might be the key here. In general I think there can’t be any hard rule against lock-like maps, because the real world allows building locks. But maybe there’s some rule along the lines of: if you define the map itself randomly enough, it probably won’t be a lock-map. (For example, you could define a map as a series of operations on two bits writing to a third one, op(i,j)→k; set aside a region of your world for it, encode bit indices and operators as bit strings so the map’s program is itself part of the world, and then define what makes a map lock-like and how probable that occurrence is.)
The new question is: what is the upper bound on bits of optimization gained from a bit of observation? What’s the best-case asymptotic scaling? The counterexample suggests it’s roughly exponential, i.e. one bit of observation can double the number of bits of optimization. On the other hand, it’s not just multiplicative, because our xor example at the top of this post showed a jump from 0 bits of optimization to 1 bit from observing 1 bit.
I think it depends on the size of the world model. Imagine an agent that is uncertain between two world models. It can construct these models in parallel but doesn’t know which one is true, so every observation it makes has two interpretations. A single observation which conclusively determines which world model was correct could, I think, produce an arbitrarily large (but resource-bounded) update.
Isn’t it unbounded?
In an absolute sense, yes, but I expect it can be bounded as a function of bits of optimization without observation. For instance, if we could at most double the number of bits of optimization by observing one bit, then that would bound the bit-gain as a function of bits of optimization without observation, even though it’s unbounded in an absolute sense.
Unless you’re seeing some stronger argument which I have not yet seen?
The scaling would also be unbounded, at least that would be my default assumption without solid proof otherwise.
In other words I don’t see any reason to assume there must be any hard cap, whether at 2x or 10x or 100x, etc...
Here are two intuitive arguments:
If we can’t observe O, we could always just guess a particular value of O and then do whatever’s optimal for that value. Then with probability P[O], we’ll be right, and our performance is lower bounded by P[O]*(whatever optimization pressure we’re able to apply if we guess correctly).
The log-number of different policies bounds the log-number of different outcome-distributions we can achieve. And observing one additional bit doubles the log-number of different policies.
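The second argument can be checked by brute counting (toy sizes of my own choosing): a deterministic policy is a function from observations to actions, so with m action bits and k observed bits there are (2^m)^(2^k) policies, and log2 of that count is m·2^k, doubling with each extra observed bit.

```python
from math import log2

m = 2  # action bits
for k in range(4):  # observed bits
    n_policies = (2 ** m) ** (2 ** k)
    # log-count of policies doubles with each additional observed bit
    print(k, log2(n_policies))
# → 0 2.0
#   1 4.0
#   2 8.0
#   3 16.0
```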
It’s possible to construct a counterexample where there’s a step from guessing at random to perfect knowledge after an arbitrary number of observed bits; n-1 bits of evidence are worthless alone and the nth bit lets you perfectly predict the next bit and all future bits.
Consider, for example, shifting bits one at a time into the input of a known hash function that’s been initialized with an unknown value (of known width), where I ask you to guess a specified bit from the output. In the idealized case, you know nothing about the output of the function until you learn the final bit of the input (once all unknown bits have shifted out), because they’re perfectly mixed; after that, you’ll guess every future bit correctly.
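A toy version of this construction, using Python’s hashlib SHA-256 as the mixing function (the “idealized case” treats it as a random oracle; the register width, secret, and stream below are arbitrary example values):

```python
import hashlib

WIDTH = 8

def output_bit(register):
    # The designated output bit: low bit of the first byte of SHA-256.
    return hashlib.sha256(bytes(register)).digest()[0] & 1

secret_init = [1, 0, 1, 1, 0, 0, 1, 0]   # unknown to the guesser
stream = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]  # observed bits, shifted in one at a time

register = list(secret_init)
for t, bit in enumerate(stream):
    register = register[1:] + [bit]
    if t + 1 >= WIDTH:
        # All unknown bits have shifted out: the register now equals the last
        # WIDTH observed bits, so the guesser predicts the output perfectly.
        reconstructed = stream[t + 1 - WIDTH : t + 1]
        assert output_bit(reconstructed) == output_bit(register)
print("guesser predicts perfectly from step", WIDTH)
```

Before step WIDTH, the register still contains unknown secret bits, so (under the random-oracle idealization) each guess is a coin flip; the nth observed bit is the one that flips the guesser from chance to certainty.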
Seems like the pathological cases can be arbitrarily messy.
I don’t think that matters, because knowing all but the last bit, I can simply take two actions—action assuming last bit is true, and action assuming it’s false.
I’m not sure I’m following the setup and notation closely enough to argue that one way or the other, as far as the order in which the agent receives evidence and has to commit to actions. Above I was considering the simplest case: 1 bit of evidence in, 1 bit of action out, repeat.
I’m pretty sure that could be extended to get that one-small-key-that-unlocks-the-whole-puzzle sort of effect, where the model clicks all at once. As you say, though, I’m not sure that gets to the heart of the matter regarding the bound: it may show that no such bound exists on the margin (the last piece can be much more valuable on the margin than all the prior pieces of evidence), but not necessarily in a way that violates the proposed bound overall. Maybe we have to see that last piece as unlocking some bounded amount of value from your prior observations.