Meta-question: are you primarily asking for better assumptions, or for the existing assumptions to be made more explicit?
I would be most interested in an explanation of the assumptions that is grounded in the distribution you're trying to approximate. It's hard to tell which parts of the assumptions are bad without knowing which properties the true distribution actually has, or why you believe it has property XYZ.
Re MLPs: I agree that we ideally want something general, but your post looks like evidence that something about the assumptions is wrong and doesn't transfer to MLPs, breaking the method there. So we probably want to understand better which of the assumptions fail in that setting. If you have a toy model that better represents the true distribution, you can confidently iterate on methods via the toy model (see the sketch below).
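For concreteness, here's a minimal sketch of what I mean by "iterate via the toy model" (in PyTorch; the feature count, sparsity level, architecture, and hyperparameters are all illustrative assumptions, not your setup). Because the generating distribution is known, you can score a method directly against ground truth rather than eyeballing features:

```python
import torch

torch.manual_seed(0)

# Toy "true" distribution: n_feat sparse features, each active independently
# with probability p, embedded in d_model dims via random unit directions.
n_feat, d_model, p, n_samples = 64, 16, 0.05, 4096
true_dirs = torch.nn.functional.normalize(torch.randn(n_feat, d_model), dim=-1)

active = (torch.rand(n_samples, n_feat) < p).float()
mags = active * torch.rand(n_samples, n_feat)  # sparse feature activations
x = mags @ true_dirs                           # observed data in superposition

# A simple sparse autoencoder: ReLU encoder plus L1 penalty on activations.
enc = torch.nn.Linear(d_model, n_feat)
dec = torch.nn.Linear(n_feat, d_model, bias=False)
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

for step in range(2000):
    acts = torch.relu(enc(x))
    recon = dec(acts)
    loss = (recon - x).pow(2).mean() + 1e-3 * acts.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Ground truth is known, so we can measure recovery directly: max cosine
# similarity between each true feature direction and the learned dictionary.
learned = torch.nn.functional.normalize(dec.weight.T, dim=-1)
sims = (true_dirs @ learned.T).max(dim=-1).values
print(f"mean max-cosine recovery: {sims.mean():.3f}")
```

The point is just that once the true distribution is in hand, any change to the assumptions (feature correlations, non-uniform frequencies, magnitude distributions) becomes a one-line tweak to the generator, and you can check immediately whether the method still recovers the features.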
> Undertrained autoencoders
I was actually thinking of the LM when writing this, but yeah, the autoencoder itself might also be a problem. Great to hear you're thinking about that.