tl;dr as of 18/2/2022

The goal is to educate me and maybe others. I make some statements, you tell me how wrong I am (please).
After input from P. (many thanks) and an article by Paul Christiano, this statement stands as yet uncorrected:
In the worst case, the internal state of the predictor is highly correlated within itself, and multiple zero-loss mappings exist from the internal state to the desired information. The only solution is to work with some prior belief about how the internal state maps to the desired information. But by design of the contest this is not possible, since (in the worst case) a human can interpret neither the internal state nor complex actions, and so cannot reason about them or form a prior belief. The solution to this second problem is to learn a prior from a smaller human-readable dataset, for example simple information as a function of simple actions, and apply it to (or force it upon) our reporter (as described in the mentioned article).
To my eyes this implies that there is a counterexample to all of the following types of proposal:
1) Datasets including only actions, predictions, internal states and desired information, be they large or small, created by smart or stupid humans (I mean the theory, not the authors of the proposal), with or without extra information from within the vault.
2) “Simple” designs for the reporter using some prior belief about how the internal state should map.
3) Having a strong prior belief (as the author) about how the reporter will map, using the above two points.
And to my eyes this leaves room only for proposals that find out how to:
1) Distinguish reporters between human-imitators and translators without creating a simple reporter.
2) Machine-learn how to transfer a prior belief learned from a simple dataset to a larger, complex dataset, without creating another black-box AI with all of the faults mentioned above.
Please, feel free to correct me and thank you in advance if you do!
Hi all,
I’m just a passerby. A few days ago Robert Miles and his wonderful YouTube channel pointed me in the direction of this contest. It’s safe to say that I have no qualifications for anything close to this field, but it got me thinking. In all honesty, I probably should not have entered anything and wasted anyone’s time. But hey, there was a deadline and a prize, so I did.
Because my proposal will probably end up in the trash, I’m set on learning as much as I can from you smart people. Get my prize in knowledge, as it were (the bigger prize, I think).
My question

My intuition is that there can be no such setup that guarantees a correct reporter. My question to you is: is my logic sound? If not, where do I err?
Setup

Let’s say the ‘real world’ causal graph is (using → for directed graphs):
A → G
Where A is some actions and G is some small detail we care about along the way.
And our super AI looks like this (using :> for input/output of functions):
A :> [I] :> S
Where A is the actions as before, I is this complex opaque inner state and S is the predicted state after the actions.
And our reporter looks like this:
I :> G
Where I is the internal state of the bigger AI again and G is that small piece of information we’d like to elicit from the inner state. We train this reporter on a dataset of samples from P(I|A) and the true P(G|A) until we get zero loss.
Now we want to know if our reporter (I :> G) generalizes well. In other words we want to know if it has learned the correct mapping between some part of I and G.
My thinking, the first way

Once, some time ago, our perfect AI was trained to learn the joint distribution P(A,S). It learned that S is a non-linear, complex function of A using some complex, layered inner state I. If we think of I as a set of parts P, then it has many parts {p1, p2, p3 … pn}. And we can think of our AI as some graph:
A → p1 → p2 → … → pn → S
And they have the Markov property, so P(pn | p1…pn−1) = P(pn | pn−1). In English: each part carries the information of the layers before it, else P(S | A) would not equal P(S | pn). So when we set our reporter to learn the function between I and G, it sees highly correlated inputs from a joint distribution P(p1, p2, p3, …, pn) where each p carries information about the others. From that input it has to construct its own internal causal graph. What we want our reporter to learn is G as a function of P(I|A). But what graph should it construct?
A → I → G, which could be:
A → p1 → G, or
A → p2 → G, or
A → p3 → G
...
A → pn → G, or any variation of parts.
But let’s say there was some way to settle on only one internal graph using only one part (say p1): what would that require? It would require that p1 not be correlated with the other p’s. It would require that p1 carries no information other than about A. But if p1 carried no information or correlation from the other p’s, the Markov property would be broken and our perfect AI would not be perfect.
What I’m saying is that there can be no single graph learned by the reporter, because if it could it would require the super AI to be no super AI.
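To make this concrete, here is a toy numerical sketch (my own construction, not the actual ELK setup): a deterministic, invertible chain A → p1 → p2 → S. Because each part determines the next, every part also determines G, so reporters reading different parts of the internal state all reach zero loss on the same data.

```python
# Toy illustration (not the real ELK predictor): a deterministic,
# invertible chain A -> p1 -> p2 -> S. Every layer carries full
# information about A, so every layer also determines G.

def layer1(a):                 # p1 = f(A), invertible
    return 2 * a + 1

def layer2(p1):                # p2 = g(p1), invertible
    return 3 * p1

def ground_truth_g(a):         # the small detail G we care about
    return a % 2

# Two candidate reporters, reading *different* parts of the state:
def reporter_from_p1(p1):
    a = (p1 - 1) // 2          # invert layer1
    return a % 2

def reporter_from_p2(p2):
    a = (p2 // 3 - 1) // 2     # invert layer2, then layer1
    return a % 2

# Both fit the training data perfectly (zero loss), so the data alone
# cannot tell us which part of I the reporter is actually reading.
for a in range(100):
    p1 = layer1(a)
    p2 = layer2(p1)
    assert reporter_from_p1(p1) == ground_truth_g(a)
    assert reporter_from_p2(p2) == ground_truth_g(a)
```

The sketch only shows the ambiguity; it says nothing about which reporter gradient descent would actually find.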
My thinking, the second way

Let’s elaborate on this graph-thing. I use a causal graph as a stand-in for a learned function. I think that it’s similar enough. For example, let’s say our output is a function of the input, so:
let output = AI (input)
And let’s say this AI has some layers, h1 and h2 such that:
let h1 = f(input)
let h2 = g(h1)
let o = h(h2)
Then the function AI can be written by composition (using F# notation):
let AI = f >> g >> h
That looks a lot like a (causal) graph:
input → h1 → h2 → output
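The same composition as a runnable sketch, in Python for convenience (f, g and h are arbitrary stand-in layers of my choosing):

```python
# Runnable version of the composition sketch above.
# f, g, h are arbitrary stand-in layers.

def f(x):            # input -> h1
    return x + 1

def g(h1):           # h1 -> h2
    return h1 * 2

def h(h2):           # h2 -> output
    return h2 - 3

def compose(*fns):
    """Left-to-right function composition, like F#'s >> operator."""
    def composed(x):
        for fn in fns:
            x = fn(x)
        return x
    return composed

AI = compose(f, g, h)            # input -> h1 -> h2 -> output
assert AI(10) == h(g(f(10)))     # the graph and the composition agree
```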
Now say we create and train our reporter to zero loss. And let’s assume it finds some way to correlate some part of the internal state (in our small example above, say h1) to the value we want to know, G. For this it gets to train on the joint (and correlated) distribution P(h1, h2) with target G.
let G = reporter (h1,h2)
and it learns the internal graph (I’ll skip writing the functions):
h2 → h1 → G
That would be the best case. A translator. But equally possible would be
h1 → h2 → G
or even worse would be if the reporter reconstructed (as described in the report) the output of the super AI, creating a human simulator.
h1 → h2 → S → G
My point is, the input variables into the reporter are correlated, and other values can be reconstructed. So, by the rule that no single causal graph can be concluded from highly correlated variables without outside knowledge, all graph versions can map the AI’s internal state to our hoped-to-be-elicited information, but we have no way to know which graph was internalized. Unless we make a reporter-reporter. But that would require reporters ad infinitum.
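The translator/human-simulator split can be sketched in code (again a hypothetical toy with invertible layers of my own choosing): both reporters below reach zero loss, but one decodes G straight from h1 while the other first reconstructs the output S.

```python
# Hypothetical toy: two reporters with identical (zero) training loss
# but different internal "graphs".

def f(a):                  # h1 = f(A)
    return a + 7

def g(h1):                 # h2 = g(h1)
    return h1 * 10

def predict_s(h2):         # S, the super AI's prediction
    return h2 + 1

def true_g(a):             # the latent detail we want
    return a % 3

# "Translator": decodes G directly from h1.
def translator(h1, h2):
    return (h1 - 7) % 3

# "Via the output": reconstructs S from the internals first, then works
# backwards from the prediction -- a different computation that gives
# the same answers on all of the training data.
def via_output(h1, h2):
    s = predict_s(h2)
    a = (s - 1) // 10 - 7
    return a % 3

for a in range(50):
    h1, h2 = f(a), g(f(a))
    assert translator(h1, h2) == true_g(a)
    assert via_output(h1, h2) == true_g(a)
```

In this toy, inverting via S still gives the truth; the worry in the report is the analogous reporter that reconstructs what a human would *believe* from S, which diverges from the truth exactly on the hard cases.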
Conclusion

Reasoning along the above two methods I saw no solution to the problem of the reporter. I’m probably wrong. But I’d like to know why if I can. Thanks in advance!

Thomas
The Markov property doesn’t imply that we can’t determine what variable we care about using some kind of “correlation”. Some part of the information in some node in the chain might disappear when computing the next node, so we might be able to distinguish it from its successors. And it might also have been gained when randomly computing its value from the previous node, so it might be possible to distinguish it from its predecessors.
In the worst-case scenario where all variables are in fact correlated with G, what we need to do is use a strong prior so that training prefers the correct computational graph over the wrong ones. This might be hard but it isn’t impossible.
But you can also try to create a dataset that makes the problem easier to solve, or train a wrong reporter and only reply when the predictions made using each node are the same, so we don’t care which node it actually uses (as long as it uses the nodes properly, instead of computing some other node and using it to get the answer, or something like that).
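The first point, that information *disappearing* along the chain makes nodes distinguishable, can be illustrated with a toy example (my construction): here h2 discards exactly the bit that is G, so only a reporter reading h1 can fit the data.

```python
# Toy version of the "information disappears" point: h2 drops the low
# bit of h1, and that bit is exactly G, so the nodes are distinguishable.

def f(a):                 # h1 keeps everything
    return a

def g(h1):                # h2 throws away the low bit
    return h1 // 2

def true_g(a):            # G is precisely the dropped bit
    return a % 2

def reporter_h1(h1):
    return h1 % 2

# A reporter reading h1 reaches zero loss...
assert all(reporter_h1(f(a)) == true_g(a) for a in range(100))

# ...but no function of h2 alone can: a=2 and a=3 collide in h2 while
# having different G, so any reporter reading only h2 must err on one.
assert g(f(2)) == g(f(3)) and true_g(2) != true_g(3)
```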
Thank you very much for your reply!

I’ll concede that the Markov property does not make all nodes indistinguishable. I’ll go further and say that not all algorithms have to have the Markov property. A Google search taught me that an RNN breaks the Markov property. But then again, we are dealing with the worst-case game, so with our luck, it’ll probably be some highly correlated thing.
You suggest using some strong prior belief. I assume you mean a prior belief about I or about I → G? I thought, but correct me if I’m wrong, that the opaqueness of the internal state of the complex AI means we can have no meaningful prior belief about the internal state. That would rule out a prior belief about (the hyperparameters of) our reporter I → G. Or am I wrong?
We can, however, have a strong idea about A → G, as per the example of the ‘human operator’, and use that as our training data. But that falls to the counterexample given in the report, when the distribution shifts from simple to complex.
RNNs break the Markov property in the sense that they depend on more than just the previous element in the sequence they are modelling. But I don’t see why that would be relevant to ELK.
When I say that a strong prior is needed I mean the same thing that Paul means when he writes: “We suspect you can’t solve ELK just by getting better data—you probably need to ‘open up the black box’ and include some term in the loss that depends on the structure of your model and not merely its behaviour.” Which is a very broad class of strategies.
I also don’t understand what you mean by having a strong idea about A → G; we of course have pairs of [A, G] in our training data, but what we need to know is how to compute G from A given these pairs.
Updating my first line of thought

RNNs break the Markov property in the sense that they depend on more than just the previous element in the sequence they are modelling. But I don’t see why that would be relevant to ELK.
You’re right in that RNNs don’t have anything to do with ELK, but I came back to it because the Markov property was part of the lead up to saying that all parts of I are correlated.
So with your help, I have to change my reasoning to:
In the worst case our reporter needs to learn the function between highly correlated I and our target G.
Correct? Then I can update my first statement to
In the worst case, I is highly correlated to such a point that no single part of I can be uniquely mapped to G, regardless of any ontological mismatch.
If I’m wrong, do let me know!
Updating my second line of thought
When I say that a strong prior is needed I mean the same thing that Paul means when he writes: “We suspect you can’t solve ELK just by getting better data—you probably need to ‘open up the black box’ and include some term in the loss that depends on the structure of your model and not merely its behaviour.” Which is a very broad class of strategies.
Ah yes, I understand now. This relates to my second line of thought. I reasoned that the reporter could learn any causal graph. I said we had no way of knowing which.
Because of your help, I need to update that to:
We have no way of knowing which causal graph was learned if we used a black box as our reporter.
Which was in the opening text all along...
But this leads me to the question:
If I cannot reason about internal state I, can I have a prior belief about I? And if I have no prior belief about I, can I have a prior belief about G as a function of I?
My analogy would be: If I don’t know where I am, how can I reason about getting home?
And, if you’ll humor me, my follow-up statement would be:
If I can form no prior belief about G as a function of I and this function has to have some non-small complexity, then no option remains but a priorless black box.
Again, if I’m wrong: let me know! I’m learning a lot already.
Irrelevant side note: I saw you using the term computational graph. I chose the term causal graph because I liked it being closer to the ground truth. Besides, a causal graph learned by some algorithm need not be exactly the same as its computational graph. And then I chose such simple examples that they were equal again. Stupid me.
As before, I am behind the curve. Above I concluded that I can form no prior belief about G as a function of I. I cannot, but we can learn a function to create our prior. Paul Christiano already wrote an article about learning the prior (https://www.lesswrong.com/posts/SL9mKhgdmDKXmxwE4/learning-the-prior).
So in conclusion: in the worst case, no unique function mapping I to G exists, as there are multiple candidates reducing down to either camp translator or camp human-imitator. Without context we can form no strong prior, due to the complexity of A and I, but as Paul described in his article, we can learn a prior from, in our case, the dataset containing G as a function of A.
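As a cartoon of the tie-breaking role such a learned prior could play (entirely my own toy, not the mechanism in Paul's article): two reporters fit the large dataset equally well, and an extra loss term from a small human-labelled model separates them.

```python
# Cartoon of tie-breaking with a learned prior (my own toy). Both
# reporters below fit the large dataset perfectly; the prior term,
# fitted from a small human-readable dataset, separates them. Caveat:
# here the human data happens to cover a point where they disagree,
# which the real problem does not allow; a real prior must extrapolate.

def translator(a):
    return a % 3

def human_imitator(a):                 # agrees only on the easy cases
    return a % 3 if a < 10 else 0

big_data = [(a, a % 3) for a in range(10)]   # both fit this perfectly
small_human_data = {0: 0, 4: 1, 11: 2}       # hypothetical human labels

def data_loss(reporter):
    return sum((reporter(a) - g) ** 2 for a, g in big_data)

def prior_loss(reporter):
    return sum((reporter(a) - g) ** 2 for a, g in small_human_data.items())

assert data_loss(translator) == 0 and data_loss(human_imitator) == 0
assert prior_loss(translator) < prior_loss(human_imitator)
```

The behavioural loss alone cannot separate the two camps; only the extra term, which depends on something other than fit to the big dataset, does.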
I’ll add a tl;dr to my first post to shorten the read about how I slowly caught up with everyone else. Corrections are of course still welcome!