Updating my first line of thought
RNNs break the Markov property in the sense that they depend on more than just the previous element in the sequence they are modelling. But I don’t see why that would be relevant to ELK.
You’re right that RNNs don’t have anything to do with ELK, but I came back to it because the Markov property was part of the lead-up to saying that all parts of I are correlated.
So with your help, I have to change my reasoning to:
In the worst case, our reporter needs to learn the function from the highly correlated I to our target G.
Correct? Then I can update my first statement to:
In the worst case, the parts of I are so highly correlated that no single part of I can be uniquely mapped to G, regardless of any ontological mismatch.
If I’m wrong, do let me know!
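To make that statement concrete, here is a minimal toy sketch in Python (my own construction, not anything from the ELK report): three near-duplicate components of I each predict G equally well, so the data alone cannot single out which part of I "really" maps to G.

```python
# Toy illustration: when the components of the internal state I are highly
# correlated, several different readouts of I reproduce the target G equally
# well on the data we have.
import numpy as np

rng = np.random.default_rng(0)

n = 1000
latent = rng.normal(size=n)              # one underlying factor
noise = 0.01 * rng.normal(size=(n, 3))
I = latent[:, None] + noise              # three near-duplicate parts of I
G = latent > 0                           # the target we care about

# Three "reporters", each reading a different part of I.
for part in range(3):
    acc = np.mean((I[:, part] > 0) == G)
    print(f"reporter reading I[{part}]: accuracy = {acc:.3f}")
# All three score close to 1.0, so the data alone cannot tell us which part
# of I (or which mixture of parts) a reporter actually relies on.
```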
Updating my second line of thought
When I say that a strong prior is needed I mean the same thing that Paul means when he writes: “We suspect you can’t solve ELK just by getting better data—you probably need to ‘open up the black box’ and include some term in the loss that depends on the structure of your model and not merely its behaviour.” Which is a very broad class of strategies.
Ah yes, I understand now. This relates to my second line of thought. I reasoned that the reporter could learn any causal graph. I said we had no way of knowing which.
Because of your help, I need to update that to:
We have no way of knowing which causal graph was learned if we used a black box as our reporter.
Which was in the opening text all along...
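As a toy sketch of that black-box point (again my own construction; the names reporter_translator and reporter_imitator are just illustrative labels for the two camps): two reporters with different internal structure agree on every on-distribution input, so behaviour alone cannot tell us which computation was learned.

```python
# Two reporters with different internals that are indistinguishable by
# behaviour on the training distribution.
import numpy as np

def reporter_translator(I):
    # Reads the part of I that actually tracks the world.
    return I[:, 0] > 0

def reporter_imitator(I):
    # Reads a correlated part of I that merely predicts what a human would say.
    return I[:, 1] > 0

rng = np.random.default_rng(1)
latent = rng.normal(size=500)
I_train = np.stack([latent, latent], axis=1)  # perfectly correlated on-distribution

agreement = np.mean(reporter_translator(I_train) == reporter_imitator(I_train))
print(f"agreement on training data: {agreement:.2f}")  # 1.00

# Off-distribution the two columns of I come apart, and so do the reporters:
I_shift = np.array([[1.0, -1.0]])
print(reporter_translator(I_shift), reporter_imitator(I_shift))
```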
But this leads me to the question:
If I cannot reason about internal state I, can I have a prior belief about I? And if I have no prior belief about I, can I have a prior belief about G as a function of I?
My analogy would be: If I don’t know where I am, how can I reason about getting home?
And, if you’ll humor me, my follow-up statement would be:
If I can form no prior belief about G as a function of I, and this function has to have some non-trivial complexity, then no option remains but a priorless black box.
Again, if I’m wrong, let me know! I’m learning a lot already.
Irrelevant side note: I saw you using the term computational graph. I chose the term causal graph because I liked that it is closer to the ground truth. Besides, a causal graph learned by some algorithm need not be exactly the same as its computational graph. And then I chose such simple examples that they were equal again. Stupid me.
As before, I am behind the curve. Above I concluded that I can form no prior belief about G as a function of I. I cannot, but we can learn a function that serves as our prior. Paul Christiano already wrote an article about learning the prior (https://www.lesswrong.com/posts/SL9mKhgdmDKXmxwE4/learning-the-prior).
So in conclusion: in the worst case there is no unique function mapping I to G, as multiple candidates fit the data, reducing down to either the translator camp or the human-imitator camp. Without context we can form no strong prior, due to the complexity of A and I, but as Paul described in his article we can learn a prior from, in our case, the dataset containing G as a function of A.
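As a very loose sketch of what I mean here (my own simplification, not Paul Christiano’s actual proposal): fit a simple model on the (A, G) dataset and treat its predictions as a learned baseline, i.e. a prior that candidate reporters are regularized toward.

```python
# Loose toy sketch of "learning a prior" from the (A, G) dataset.
import numpy as np

rng = np.random.default_rng(2)

A = rng.normal(size=(200, 4))                  # observed inputs / actions
G = A @ np.array([1.0, -1.0, 0.5, 0.0]) > 0    # labels: G as a function of A

# "Learned prior": a least-squares fit of G on A.
w_prior, *_ = np.linalg.lstsq(A, G.astype(float), rcond=None)
prior_pred = A @ w_prior

def regularized_loss(reporter_pred, labels, strength=1.0):
    """Data loss plus a penalty for deviating from the learned-prior baseline."""
    data_loss = np.mean((reporter_pred - labels) ** 2)
    prior_loss = np.mean((reporter_pred - prior_pred) ** 2)
    return data_loss + strength * prior_loss

# A candidate reporter that matches the learned prior pays no prior penalty.
print(regularized_loss(prior_pred, G.astype(float)))
```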
I’ll add a tl;dr to my first post to shorten the read on how I slowly caught up to everyone else. Corrections are of course still welcome!