I don’t think that’s right. IIUC this is a logical AND, so the values would be in {0, 1} (as required, since Tracr operates with Boolean attention). For a more extensive discussion of the original problem, see Appendix C.
Appendix C attempts to discard the softargmax, but it’s an integral part of my suggestion. If the inputs to the softargmax take values in {0, 1000, 2000}, the outputs will take only two values.
From the RASP paper (https://arxiv.org/pdf/2106.06981):

“The uniform weights dictated by our selectors reflect an attention pattern in which ‘unselected’ pairs are all given strongly negative scores, while the selected pairs all have higher, similar, scores.”
Addition of such attention patterns corresponds to ANDing of such selectors.
(The quote refers to the usage of binary attention patterns in general, so I’m not sure why you’re quoting it.)
I obviously agree that if you take the softmax over {0, 1000, 2000}, you will get 0 and 1 entries.
IIUC, the statement in the Tracr paper is not that you can’t have attention patterns which implement this logical operation, but that you can’t have a single head implementing this attention pattern (without exponential blowup).
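(For concreteness, a minimal numpy sketch of the saturation effect being discussed; the scale of 1000 comes from the example above, and anything large enough to saturate the softmax behaves the same:)

```python
import numpy as np

def softmax(x):
    # Subtract the max for numerical stability: np.exp(2000) would overflow.
    z = np.exp(x - np.max(x))
    return z / z.sum()

# Summed Boolean selector scores scaled by 1000: 0 = neither selector
# fires, 1000 = exactly one fires, 2000 = both fire.
scores = np.array([0.0, 1000.0, 2000.0, 2000.0])
print(softmax(scores))  # [0. 0. 0.5 0.5] -- effectively binary, uniform over the maxima
```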
(To show my idea is compatible with Boolean attention.)
I use a single head, and the ranks add up linearly (the rank of a sum of score matrices is at most the sum of their ranks), so there’s no exponential blowup.
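(A sketch of the single-head scheme as I understand it, with toy selector matrices made up for illustration; the score matrices are summed before the softmax, so the combined score matrix has rank at most the sum of the selectors’ ranks:)

```python
import numpy as np

def softmax(x, axis=-1):
    z = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

# Toy Boolean selector matrices (rows = queries, columns = keys).
sel_a = np.array([[1, 1, 0],
                  [0, 1, 1],
                  [1, 0, 1]])
sel_b = np.eye(3, dtype=int)

# One head: add the scaled score matrices before the softmax.
pattern = softmax(1000.0 * (sel_a + sel_b))

# The head attends exactly where BOTH selectors select. (Each row of the
# AND happens to have a single 1 here, so the uniform weights are exactly 1.)
print(np.allclose(pattern, sel_a & sel_b))  # True
```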
You’re right that accounting for the softmax could be used to get around the argument in the appendix. We’ll mention this when we update the paper.
The scheme as you described it relies on an always-on dummy token, which would conflict with our implementation of default values in aggregates: we fix the BOS token attention score to 0.5, so at low softmax temperature it’s attended to iff nothing else is. With this scheme, however, we’d want 0.5 to always round off to zero after the softmax. This is plausibly surmountable, but we put it out of scope for the pilot release: we didn’t seem to need this feature, whereas default values come up pretty often.
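(A toy illustration of the default-value mechanism described above; the constants are made up and the real Tracr internals differ, but it shows the 0.5-threshold behaviour at low temperature:)

```python
import numpy as np

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

TEMP = 1e-3  # low softmax temperature, approximating a hard argmax
BOS = 0.5    # the fixed BOS attention score

# Nothing else selected: BOS has the top score, so the head attends to it
# and the aggregate falls back to the default value.
print(softmax(np.array([BOS, 0.0, 0.0]) / TEMP))  # ~[1, 0, 0]

# Something selected (score 1 > 0.5): BOS loses and the default is ignored.
print(softmax(np.array([BOS, 1.0, 0.0]) / TEMP))  # ~[0, 1, 0]
```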
Also, while this construction will work for just AND (with arbitrarily many conjuncts), I don’t think it works for arbitrary compositions using AND and OR. (Of course it would be better than no composition at all.)
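(One way to see the obstruction, with a hypothetical example: a single head with summed scores and one cutoff computes a linear threshold function of the clause indicators, and (a AND b) OR (c AND d) is not one:)

```python
# Hypothetical counterexample: no weights and cutoff on the four clause
# indicators implement (a AND b) OR (c AND d).
cases = {
    (1, 1, 0, 0): True,   # a AND b      -> must clear the cutoff
    (0, 0, 1, 1): True,   # c AND d      -> must clear the cutoff
    (1, 0, 1, 0): False,  # mixed halves -> must NOT clear it
    (0, 1, 0, 1): False,  # mixed halves -> must NOT clear it
}
# The two True rows and the two False rows sum (coordinate-wise) to the
# same vector (1, 1, 1, 1), hence the same total under ANY weights, so no
# cutoff separates them. With uniform weights, all four score exactly 2:
for bits, wanted in cases.items():
    print(bits, sum(bits), wanted)
```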
In general, I expect there to be a number of potentially low-hanging improvements to be made to the compiler: many of them we deliberately omitted and are mentioned in the limitations section, and many we haven’t yet come up with. There are tons of features one could add, each of which takes time to think about and increases overall complexity, so we had to be judicious about which lines to pursue; even then, we barely had time to get into superposition experiments. We currently aren’t prioritising further Tracr development until we see it used for research in practice, but I’d be excited to help anyone who’s interested in working with it or contributing.
Fixing the BOS score to 1.5 does conjunction without my dummy. When at most one clause is true, 0.5 does disjunction instead.
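(A quick numeric check of both claims, under the same saturating-softmax setup as the earlier sketches, with BOS at position 0:)

```python
import numpy as np

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

SCALE = 1000.0  # enough to saturate the softmax

def attend(bos_score, clause_sums):
    # Position 0 is BOS; the rest hold the summed clause indicators.
    return softmax(SCALE * np.array([bos_score] + clause_sums))

# Conjunction via BOS = 1.5: only a position where both clauses hold (sum 2) beats BOS.
print(attend(1.5, [2, 1, 0]))  # ~[0, 1, 0, 0]: attends where both clauses hold
print(attend(1.5, [1, 1, 0]))  # ~[1, 0, 0, 0]: no conjunction anywhere, BOS wins

# Disjunction via BOS = 0.5, assuming at most one clause holds per position.
print(attend(0.5, [1, 0, 1]))  # ~[0, 0.5, 0, 0.5]: uniform over satisfying positions
```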
Ah, yep, I think you’re right: it should be pretty easy to add support for AND in selectors then.