I think many people have the misapprehension that one can just meditate on abstract properties of “advanced systems” and come to good conclusions about unknown results “in the limit of ML training”, without much in the way of technical knowledge about actual machine learning results or even a track record in predicting results of training.
Two days ago I argued with someone that GPTs would not be an existential risk no matter how extremely they were scaled up, and it eventually turned out that they took the adjectives “generative pretrained” to be separable descriptors, whereas I took them to refer to a narrow, specific training method.
For example, several respected thinkers have uttered to me English sentences like “I don’t see what’s educational about watching a line go down for the 50th time” and “Studying modern ML systems to understand future ones seems like studying the neurobiology of flatworms to understand the psychology of aliens.”
These statements are not necessarily (at least by themselves; additional context may be missing) examples of discussion about what happens “in the limit of ML training”, as these people may be concerned with the limit of ML architecture development rather than of training alone.