Vague thoughts/intuitions:
I think using the word “importance” is misleading, or at least makes it harder to reason about the connection between this toy scenario and real text data. In real comedy/drama, there are patterns in the data that let me/the model deduce whether it is comedy or drama, and hence allow focusing on the conditionally important features.
Phrasing the task as follows helps me: you will be given 20 random numbers x1 to x20, and I want you to find projections that can recover x1 to x20. Half the time I will ignore your answers for x1 to x10, and the other half of the time for x11 to x20; it’s totally random which half I will ignore. x_i and x_{10+i} get the same reward, and the reward decreases for bigger i. Now I find it easier to understand the model: the “obvious” strategy is to make sure I can reproduce x1 and x11, then x2 and x12, and so on, putting little weight on x10 and x20. Equivalently, this is the same as having a fixed importance of (0.7, 0.49, ..., 0.7, 0.49, ...) without any conditioning.
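To make the claimed equivalence concrete, here is a small numerical sketch (my own illustration; the per-coordinate importance 0.7^i within each half is an assumption, just one choice consistent with “reward decreases for bigger i”). Averaging over the two equally likely masks yields exactly the fixed importance vector (0.7, 0.49, ..., 0.7, 0.49, ...), up to an overall factor of 1/2 that doesn’t affect what the model optimizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
# Assumed per-coordinate importance within each half: 0.7^i for i = 1..10.
half_imp = 0.7 ** np.arange(1, 11)

# Squared reconstruction errors for some model output vs. the target.
x = rng.normal(size=n)
x_hat = x + rng.normal(scale=0.1, size=n)
err = (x - x_hat) ** 2

# Conditional importance: half the time only x1..x10 count, otherwise x11..x20.
mask_a = np.concatenate([half_imp, np.zeros(10)])
mask_b = np.concatenate([np.zeros(10), half_imp])
expected_conditional_loss = 0.5 * (mask_a @ err) + 0.5 * (mask_b @ err)

# Fixed importance (0.7, 0.49, ..., 0.7, 0.49, ...), scaled by the constant 1/2.
fixed_imp = 0.5 * np.concatenate([half_imp, half_imp])
fixed_loss = fixed_imp @ err

# The two training objectives agree in expectation.
assert np.isclose(expected_conditional_loss, fixed_loss)
```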
A follow-up I’d be interested in is whether anything changes if the conditional importance is deducible from the data, e.g. x is a “comedy” if x1 + … + x20 > 0, or if x1 > 0. With the same architecture I’d predict getting the same results though...? I’m not sure how the model could make use of this pattern.
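A minimal sketch of what such a “deducible context” dataset could look like, using the hypothetical rule that a sample is a “comedy” iff x1 > 0 (any deterministic function of the input would do):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n = 1000, 20
x = rng.normal(size=(n_samples, n))

# Context is now a function of the input itself: "comedy" iff x1 > 0
# (hypothetical labeling rule; the sum-of-all-coordinates rule works the same way).
comedy = x[:, 0] > 0

# Per-sample importance mask: comedy samples are scored on x1..x10,
# drama samples on x11..x20.
first_half = np.concatenate([np.ones(10), np.zeros(10)])
second_half = np.concatenate([np.zeros(10), np.ones(10)])
mask = np.where(comedy[:, None], first_half, second_half)

# Unlike the random-context setup, a model could in principle read the mask
# off the input (via the sign of x1) and reconstruct only the relevant half.
```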
And contrary to Charlie, I personally found the experiment crucial to understanding the informal argument. Shows how differently people think!
Thanks for the thoughts --
I used the term “importance” since this was the term used in Anthropic’s original paper. I agree that (unlike in a real model) my toy scenario doesn’t contain sufficient information to deduce the context from the input data.
I like your phrasing of the task—it does a great job of concisely highlighting the ‘Mathematical Intuition for why Conditional Importance “doesn’t matter”’.
Interesting that the experiment was helpful for you!