Jose Sepulveda
Karma: 0
Looking at your code I see you still add an L1 penalty to the loss, is this still necessary? In my own experiments I’ve noticed that top-k is able to achieve sparsity on its own without the need for L1.
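To illustrate what I mean, here’s a minimal sketch of a top-k sparse autoencoder trained with a plain reconstruction loss and no L1 term; the class and parameter names (`TopKSAE`, `d_model`, `d_hidden`, `k`) are just placeholders, not your implementation:

```python
import torch
import torch.nn as nn


class TopKSAE(nn.Module):
    """Minimal top-k sparse autoencoder: sparsity is enforced structurally
    by keeping only the k largest pre-activations per sample, so no L1
    penalty on the latents is needed in the loss."""

    def __init__(self, d_model: int, d_hidden: int, k: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pre = self.encoder(x)
        # Keep the k largest activations, zero out the rest.
        topk = torch.topk(pre, self.k, dim=-1)
        latents = torch.zeros_like(pre).scatter_(-1, topk.indices, topk.values)
        return self.decoder(latents)


# Plain MSE reconstruction loss, no sparsity penalty.
model = TopKSAE(d_model=512, d_hidden=4096, k=32)
x = torch.randn(8, 512)
loss = torch.nn.functional.mse_loss(model(x), x)
```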
Oh I see that, thanks! :) Super interesting work. I’m testing its application to recommender systems.