If you have a more detailed grasp on how exactly self-attention is close to a gradient descent step, please do let me know; I'm having a hard time making sense of the details of these papers.
Note that if computing an optimization step reduces the loss, the training process will reinforce it even if other layers aren't doing similar steps. This is another reason to expect more explicit optimizers.
Basically, self-attention is a function of certain matrices, something like this:
$$e_j \leftarrow e_j + \sum_h P_h W_{h,V} \left( \sum_i e_{h,i} \otimes e_{h,i} \right) W_{h,K}^T W_{h,Q} \, e_j$$
This looks really messy written out like that, but it's pretty natural in context.
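To make the shape of that term concrete, here's a minimal numpy sketch of the update for a single head (dropping the sum over heads), assuming linear, softmax-free self-attention; the dimensions and random matrices are placeholders for illustration, not anything from the papers:

```python
import numpy as np

# Sketch of: e_j <- e_j + P W_V (sum_i e_i (x) e_i) W_K^T W_Q e_j
# for one head, with made-up dimensions and random weights.
d, n = 4, 8
rng = np.random.default_rng(0)
E = rng.normal(size=(n, d))    # token embeddings e_1..e_n
W_Q, W_K, W_V, P = (rng.normal(size=(d, d)) for _ in range(4))

# The sum of outer products sum_i e_i e_i^T is just E^T E.
outer_sum = E.T @ E

def attn_update(e_j):
    # e_j + P W_V (sum_i e_i e_i^T) W_K^T W_Q e_j
    return e_j + P @ W_V @ outer_sum @ W_K.T @ W_Q @ e_j

print(attn_update(E[0]))
```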
If you can get that big messy-looking term to approximate a gradient descent step for a given loss function, then you're golden.
In Appendix A.1, they show the matrices that yield this gradient descent step. They are pretty simple, and probably an easy point of attraction for training to find.
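To see why simple matrices can suffice, here's a toy numerical check of the idea (my own simplified version, not the paper's exact Appendix A.1 construction): for in-context linear regression, one gradient descent step starting from zero weights changes the query's prediction by exactly a linear-attention readout with keys x_i, values y_i, and query x_q.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, lr = 3, 10, 0.1
X = rng.normal(size=(n, d))   # in-context inputs x_i
y = rng.normal(size=n)        # in-context targets y_i
x_q = rng.normal(size=d)      # query input

# One GD step on L(w) = 1/(2n) sum_i (w . x_i - y_i)^2, starting at w = 0.
w0 = np.zeros(d)
grad = (X @ w0 - y) @ X / n
w1 = w0 - lr * grad
gd_prediction = w1 @ x_q

# Linear-attention readout: keys x_i, values y_i, query x_q.
attn_prediction = lr / n * np.sum(y * (X @ x_q))

assert np.isclose(gd_prediction, attn_prediction)
```

Both sides come out to the same bilinear form in the context pairs and the query, which is part of why the required weight matrices end up so simple.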
All of this reasoning is pretty vague, and without the experimental evidence it wouldn’t be nearly good enough. So there’s definitely more to understand here. But given the experimental evidence I think this is the right story about what’s going on.