I do agree that looking at W_O alone seems a bit misguided (unless we're normalizing by looking at cosine similarity instead of dot product). However, the extent to which this is true is a bit unclear. Here are a few considerations:
- At first blush, what you said is exactly right: scaling W_in up and W_O down by the same factor leaves the implemented function unchanged.
- However, this will affect the L2 regularization penalty. All else equal, we'd expect to see ∥W_in∥ = ∥W_O∥, since that minimizes the regularization penalty among all rescalings that implement the same function (see the quick derivation after this list).
- However, this is all complicated by the fact that you can alternatively scale the LayerNorm's gain parameter, which (I think) isn't regularized.
- Lastly, I believe GPT-2 uses GELU, not ReLU. This is significant, since GELU isn't positively homogeneous, so you can no longer rescale W_in and W_O without changing the implemented function (see the numerical check below).
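To spell out the equal-norm claim in the second bullet, here's a quick sketch (treating the rescaling factor α as the only free parameter, which is an assumption for illustration). Rescaling the weights to αW_in and (1/α)W_O gives the penalty

$$\alpha^2 \|W_{in}\|^2 + \frac{1}{\alpha^2} \|W_O\|^2,$$

which (by AM-GM, or by setting the derivative in α to zero) is minimized exactly when the two terms are equal, i.e. when the rescaled weights satisfy ∥αW_in∥ = ∥(1/α)W_O∥.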
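And for the last bullet, here's a minimal numerical sketch of why the rescaling symmetry holds for ReLU but not GELU; the shapes, random weights, and the tanh approximation of GELU are just illustrative assumptions, not GPT-2's actual parameters:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gelu(x):
    # tanh approximation of GELU (the form used in GPT-2's MLP blocks)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def mlp(x, W_in, W_O, act):
    # one MLP layer: project in, apply the nonlinearity, project out
    return act(x @ W_in) @ W_O

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))       # stand-in for post-LayerNorm activations
W_in = rng.normal(size=(16, 64))   # input projection (hypothetical shape)
W_O = rng.normal(size=(64, 16))    # output projection (hypothetical shape)
alpha = 3.0                        # rescaling factor

for act in (relu, gelu):
    same = np.allclose(mlp(x, W_in, W_O, act),
                       mlp(x, alpha * W_in, W_O / alpha, act))
    print(act.__name__, same)
# Prints: relu True, gelu False -- scaling W_in up and W_O down is only a
# symmetry of the network when the activation is positively homogeneous.
```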