I’m not sure what you mean by “K-means clustering baseline (with K=1)”. I would think the K in K-means stands for the number of means you use, so with K=1, you’re just taking the mean direction of the weights. I would expect this to explain maybe 50% of the variance (or less), not 90% of the variance.
Thanks for pointing this out! I confused nomenclature, will fix!

Edit: Fixed now. I had confused

- the number of clusters (“K”), i.e. the dictionary size, with
- the number of latents active per point (“L_0”, or k in top-k SAEs).

Some clustering methods allow you to assign multiple clusters to one point, so effectively you get an “L_0 > 1”, but normal K-means assigns only one cluster per point. In short, I mixed up the K of K-means with the k (aka L_0) of top-k SAEs.
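To make the distinction concrete, here is a minimal Python sketch (on synthetic data, using scikit-learn’s KMeans; not the setup from the post) of K-means as an L_0 = 1 reconstruction baseline: K sets the dictionary size (number of centroids), but each point is reconstructed from exactly one centroid, the analogue of a top-k SAE with k = 1.

```python
# Minimal sketch (synthetic data, not the original experiment): K-means as an
# L_0 = 1 reconstruction baseline. K = dictionary size (number of centroids);
# each point is reconstructed from exactly ONE centroid, like a top-k SAE with k = 1.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 64))  # placeholder activations (n_points, d_model)

K = 512  # number of clusters = dictionary size, NOT the latents active per point
kmeans = KMeans(n_clusters=K, n_init=1, random_state=0).fit(X)

# Reconstruction: each point is replaced by its single assigned centroid (L_0 = 1).
X_hat = kmeans.cluster_centers_[kmeans.labels_]

# Fraction of variance explained by this one-centroid-per-point reconstruction.
fvu = np.square(X - X_hat).sum() / np.square(X - X.mean(axis=0)).sum()
print(f"explained variance: {1 - fvu:.3f}")
```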