Linear Connectivity Reveals Generalization Strategies suggests that models trained on the same data may fall into different basins, associated with different generalization strategies, depending on the initialization. If this is true for LLMs as well, it could be a big deal. I would very much like to know whether that's the case, and if so, whether generalization basins are stable as models scale.
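For concreteness, the basic measurement behind that paper is linear mode connectivity: train two models from different inits, linearly interpolate their weights, and check whether loss stays low along the path (same basin) or spikes (a barrier, suggesting different basins). Here is a minimal, self-contained sketch of that procedure on a toy logistic-regression problem; note that because logistic loss is convex in the weights, this toy will always show no barrier, so it only illustrates the measurement itself, not the multi-basin phenomenon, which requires non-convex models like neural nets.

```python
import numpy as np

def make_data(n=200, d=5, seed=0):
    # Toy linearly separable classification data.
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))
    true_w = rng.normal(size=d)
    y = (X @ true_w > 0).astype(float)
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, X, y):
    # Mean cross-entropy loss of logistic model with weights w.
    p = sigmoid(X @ w)
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def train(X, y, seed, steps=500, lr=0.5):
    # Gradient descent from a random init (the "init" the comment refers to).
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

X, y = make_data()
w_a = train(X, y, seed=1)  # two runs, same data,
w_b = train(X, y, seed=2)  # different random inits

# Evaluate loss along the straight line between the two solutions.
alphas = np.linspace(0.0, 1.0, 11)
path_losses = [loss((1 - a) * w_a + a * w_b, X, y) for a in alphas]

# Barrier height: how much the path rises above the worse endpoint.
# ~0 means the two solutions are linearly connected (one basin);
# a large positive value would indicate separate basins.
barrier = max(path_losses) - max(path_losses[0], path_losses[-1])
```

For a real experiment one would do the same interpolation over the full parameter vectors of two fine-tuned networks (e.g. flattened `state_dict`s in PyTorch) and evaluate test loss or agreement on out-of-distribution examples, as in the paper.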
Excited to see people thinking about this! Importantly, there's an entire ML literature out there to draw evidence from, and ways to keep studying this empirically. Some examples of the existing literature (also see Path dependence in ML inductive biases and How likely is deceptive alignment?): Linear Connectivity Reveals Generalization Strategies, on fine-tuning path-dependence; The Grammar-Learning Trajectories of Neural Language Models (and many references in that thread); Let's Agree to Agree: Neural Networks Share Classification Order on Real Datasets, on pre-training path-dependence. I can probably find many more references through my bookmarks, if there's interest in this.