The interesting thing about SSL pre-trained foundation models is that the embeddings they compute are useful for many purposes, even though the models are not trained with any particular purpose in mind. Their role is analogous to beliefs, the map of the world, the epistemic side of agency: a convergently useful choice of representation/compression that, by its epistemic nature, is adequate for many applied purposes.
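As a minimal sketch of this reusability (assuming the Hugging Face `transformers` and scikit-learn libraries; the encoder `bert-base-uncased`, the mean pooling, and the toy sentiment task are all illustrative choices, not anything specific from the original point): the same frozen embeddings, produced with no downstream task in mind, can serve as features for an arbitrary classifier via a linear probe.

```python
# Minimal sketch: reuse frozen pre-trained embeddings as features for an
# unrelated downstream task (a "linear probe"). Model and task are illustrative.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # the pre-trained model is never fine-tuned here

def embed(texts):
    """Mean-pooled hidden states from the frozen encoder."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state        # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)           # (B, T, 1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()  # (B, H)

# Toy downstream task the encoder was never trained for: sentiment labels.
texts = ["great movie", "terrible plot", "loved it", "waste of time"]
labels = [1, 0, 1, 0]

probe = LogisticRegression().fit(embed(texts), labels)
print(probe.predict(embed(["what a fantastic film"])))
```

The point of the sketch is that only the tiny probe is task-specific; the representation itself is produced once, purpose-agnostically, and then reused.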