I did not use your initialization scheme, since I was unaware of your paper at the time I was running those experiments. I will definitely try that soon!
Yeah, I can see how leaky topk and multi-topk are doing similar things. I wonder if leaky topk also gives a progressive code past the value of k used in training. That definitely seems worth looking into. Thanks for the suggestions!
Oh, yeah, looks like with p=2 this is equivalent to Hoyer-Square. Thanks for pointing that out; I didn’t know this had been studied previously.
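For readers following along, the equivalence can be written out. This is a sketch under my reading of the thread: I'm assuming the penalty under discussion is a squared $L_1/L_p$ ratio, which at $p=2$ reduces to the Hoyer-Square measure:

```latex
% Hoyer-Square: squared ratio of the L1 norm to the L2 norm
% (scale-invariant: H_S(cx) = H_S(x) for any c != 0)
H_S(x) = \frac{\left(\sum_i |x_i|\right)^2}{\sum_i x_i^2}
       = \frac{\|x\|_1^2}{\|x\|_2^2}

% Assumed general form of the penalty discussed above; setting p = 2
% recovers Hoyer-Square:
\left(\frac{\|x\|_1}{\|x\|_p}\right)^2 \Bigg|_{p=2} = H_S(x)
```

The scale invariance is the useful property here: unlike a plain $L_1$ penalty, the ratio can't be driven down just by shrinking all activations uniformly.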
And you’re right, that was a typo, and I’ve fixed it now. Thank you for mentioning that!