Plausibly, a model trained on the entirety of human output should be able to decipher more hidden states, ones that are not obvious to us but might be obvious in latent space. That could mean models end up very good at augmenting our existing understanding of fields, but might not create new ones from scratch.
Excited to see what you come up with!