One important thing to note here is that the LTH paper doesn’t demonstrate that SGD “finds” a ticket: just that the subnetwork you get by training and pruning could be retrained in isolation to higher accuracy. That doesn’t mean the subnetwork’s weights end up at the same values when it is trained in isolation as they did in the original training run!
So in particular I basically disagree with the opening summary of the content of the “lottery ticket hypothesis”. I think a better summary is found in the abstract of the original paper:
dense, randomly-initialized, feed-forward networks contain subnetworks (winning tickets) that—when trained in isolation—reach test accuracy comparable to the original network in a similar number of iterations
Yup, I agree that that quote says something which is probably true, given current evidence. I don’t think “picking a winning lottery ticket” is a good analogy for what that implies, though; see this comment.
Yup, I agree that that quote says something which is probably true, given current evidence.
I don’t know what the referent of ‘that quote’ is. If you mean the passage I quoted from the original lottery ticket hypothesis (“LTH”) paper, then I highly recommend reading a follow-up paper which describes how and why it’s wrong for large networks. The abstract of the paper I’m citing here:
We study whether a neural network optimizes to the same, linearly connected minimum under different samples of SGD noise (e.g., random data order and augmentation). We find that standard vision models become stable to SGD noise in this way early in training. From then on, the outcome of optimization is determined to a linearly connected region. We use this technique to study iterative magnitude pruning (IMP), the procedure used by work on the lottery ticket hypothesis to identify subnetworks that could have trained in isolation to full accuracy. We find that these subnetworks only reach full accuracy when they are stable to SGD noise, which either occurs at initialization for small-scale settings (MNIST) or early in training for large-scale settings (ResNet-50 and Inception-v3 on ImageNet).
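For concreteness, the iterative magnitude pruning (IMP) procedure this abstract refers to can be sketched on a toy problem. This is a minimal illustration in plain NumPy, not code from either paper: the toy linear-regression setup, the 50% pruning fraction, and all variable names are illustrative assumptions, and the actual experiments train full vision networks with SGD and rewind to an early training iterate rather than strict initialization.

```python
# Minimal sketch of one round of iterative magnitude pruning (IMP)
# with weight rewinding, on a toy linear model. Illustrative only:
# the LTH papers apply this to deep networks trained with SGD.
import numpy as np

rng = np.random.default_rng(0)

def train(w, mask, x, y, lr=0.1, steps=200):
    """Gradient descent on mean squared error, keeping pruned weights at zero."""
    for _ in range(steps):
        grad = 2 * x.T @ (x @ (w * mask) - y) / len(y)
        w = (w - lr * grad) * mask
    return w

# Toy regression problem (stand-in for a real training task).
x = rng.normal(size=(100, 10))
w_true = rng.normal(size=10)
y = x @ w_true

w_init = rng.normal(size=10)  # the candidate "ticket" initialization
mask = np.ones(10)

# Train dense, then prune the smallest-magnitude weights (50% here),
# then rewind the surviving weights to their initial values.
w_trained = train(w_init.copy(), mask, x, y)
threshold = np.quantile(np.abs(w_trained), 0.5)
mask = (np.abs(w_trained) >= threshold).astype(float)
w_rewound = w_init * mask

# Retrain the sparse subnetwork "in isolation" from the rewound weights.
w_sparse = train(w_rewound, mask, x, y)
sparse_loss = np.mean((x @ w_sparse - y) ** 2)
```

The point at issue in the thread shows up in the last step: nothing here guarantees that `w_sparse` retraces the values the surviving weights took during the original dense run, only that the sparse subnetwork can be trained on its own.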
I don’t think “picking a winning lottery ticket” is a good analogy for what that implies
Again, assuming “that” refers to the claim in the original LTH paper, I also don’t think it’s a good analogy. But by default I think that claim is what “the lottery ticket hypothesis” refers to, given that it’s a widely cited paper that has spawned a large number of follow-up works.
Oh that’s definitely interesting. Thanks for the link.