If the symmetry only holds for a particular solution in some region of the loss landscape, rather than being baked into the architecture globally, the γ value will still be conserved under gradient descent so long as the trajectory stays inside that region.
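As an illustrative sketch (my own construction, not taken from the discussion above): for a one-hidden-layer ReLU network f(x) = Σᵢ aᵢ relu(wᵢ·x), the per-neuron rescaling symmetry wᵢ → c·wᵢ, aᵢ → aᵢ/c implies that the quantity aᵢ² − ‖wᵢ‖² is exactly conserved under gradient flow, and approximately conserved under small-step gradient descent, playing the role of the conserved γ value here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-hidden-layer ReLU net: f(x) = sum_i a[i] * relu(w[i] @ x).
# The per-neuron rescaling symmetry w[i] -> c*w[i], a[i] -> a[i]/c leaves
# f unchanged; gradient flow exactly conserves a[i]^2 - ||w[i]||^2, and
# small-step gradient descent conserves it up to O(lr^2) drift per step.
d, h, n = 3, 4, 32
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
w = rng.normal(size=(h, d))
a = rng.normal(size=h)

def invariant(w, a):
    # Conserved coordinate for each hidden neuron's rescaling symmetry.
    return a**2 - (w**2).sum(axis=1)

q0 = invariant(w, a)
lr = 1e-4
for _ in range(2000):
    pre = X @ w.T                    # (n, h) pre-activations
    act = np.maximum(pre, 0.0)       # ReLU
    r = act @ a - y                  # residuals of MSE loss
    grad_a = 2.0 * (r @ act) / n                          # dL/da
    mask = (pre > 0).astype(float)
    grad_w = 2.0 * ((r[:, None] * mask * a).T @ X) / n    # dL/dw
    a -= lr * grad_a
    w -= lr * grad_w

drift = np.abs(invariant(w, a) - q0).max()
print(drift)  # small: the invariant barely moves over training
```

The first-order terms in the per-step change of aᵢ² − ‖wᵢ‖² cancel exactly (aᵢ ∂L/∂aᵢ = wᵢ·∂L/∂wᵢ for ReLU), so the residual drift is second order in the learning rate.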
...
One could perhaps still hold out hope that the conserved quantities/coordinates associated with the degrees of freedom of a particular solution are sometimes more interesting, but I doubt it. For the degrees of freedom we talk about here, for example, those invariants seem similar to the ones in the ReLU rescaling example above.
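To make the ReLU rescaling example concrete (a minimal sketch of my own, for a single hidden neuron): the rescaling leaves the function unchanged, while the associated conserved coordinate a² − ‖w‖² takes a different value at each point of the symmetry orbit, which is why it parametrises the orbit rather than telling you anything about the function being computed.

```python
import numpy as np

rng = np.random.default_rng(1)

# One hidden ReLU neuron: f(x) = a * relu(w @ x).
# For any c > 0, relu(c*z) = c*relu(z), so scaling the input weights by c
# and the output weight by 1/c leaves the computed function unchanged.
x = rng.normal(size=3)
w = rng.normal(size=3)   # input weights of the neuron
a = 0.7                  # output weight
c = 2.5

f = a * np.maximum(w @ x, 0.0)
f_scaled = (a / c) * np.maximum((c * w) @ x, 0.0)
same_function = np.isclose(f, f_scaled)
print(same_function)  # True: the rescaled parameters compute the same function

# The invariant coordinate distinguishes points along the orbit:
q, q_scaled = a**2 - w @ w, (a / c)**2 - c**2 * (w @ w)
print(q, q_scaled)  # different values for functionally identical networks
```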
Dead neurons are a special case of 3.1.1 (low-dimensional activations) in that paper, and bypassed neurons are a special case of 3.2 (synchronised non-linearities). Hidden polytopes are, I think, a mix of 3.2.2 (Jacobians spanning a low-dimensional subspace) and 3.1.1. I'm a bit unsure which, because I'm not clear on what weight direction you're imagining varying when you talk about "moving the vertex". Since the first derivative of the function you're approximating doesn't actually change at this point, there are multiple ways you could do this.
That's what I meant by:

> Dead neurons are a special case of 3.1.1 (low-dimensional activations) in that paper, and bypassed neurons are a special case of 3.2 (synchronised non-linearities). Hidden polytopes are, I think, a mix of 3.2.2 (Jacobians spanning a low-dimensional subspace) and 3.1.1. I'm a bit unsure which, because I'm not clear on what weight direction you're imagining varying when you talk about "moving the vertex". Since the first derivative of the function you're approximating doesn't actually change at this point, there are multiple ways you could do this.