If an AI doesn’t fully ‘understand’ the physics concept of “superradiance” based on all existing human writing, how would it generate synthetic data to get better?
I think “doesn’t fully understand the concept of superradiance” is a phrase that smuggles in too many assumptions here. If you rephrase it as “can determine when superradiance will occur, but makes inaccurate predictions about what physical systems will do in those situations” / “makes imprecise predictions in such cases” / “has trouble distinguishing cases where superradiance will occur vs cases where it will not”, all of those suggest pretty obvious ways of generating training data.
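To make that concrete, here is a minimal sketch (my illustration, not something from the original comment) of one such “obvious” way: pair a trusted simulator or experimental record with prediction prompts, and fine-tune on the discrepancies. `run_simulation` is a hypothetical stand-in for whatever physics code or dataset is actually available; the toy rule inside it only gestures at the collective ~N² scaling associated with superradiance and is not a real model.

```python
import random

def run_simulation(n_emitters: int, coupling: float) -> float:
    """Hypothetical stand-in for a trusted simulator: returns peak emission
    intensity. Crude toy rule: strong coupling gives collective ~N^2 scaling,
    weak coupling gives independent ~N scaling."""
    if coupling > 0.5:
        return coupling * n_emitters ** 2
    return coupling * n_emitters

def make_training_examples(num_examples: int):
    """Generate (prompt, target) pairs a model could be fine-tuned on."""
    examples = []
    for _ in range(num_examples):
        n = random.randint(2, 1000)
        g = random.random()
        intensity = run_simulation(n, g)
        prompt = (f"A system of {n} emitters with coupling {g:.2f}: "
                  f"what is the peak emission intensity?")
        examples.append({"prompt": prompt, "target": f"{intensity:.3g}"})
    return examples

if __name__ == "__main__":
    for ex in make_training_examples(3):
        print(ex)
```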
GPT-4 can already “figure out a new system on the fly” in the sense of taking some repeatable phenomenon it can observe, and predicting things about that phenomenon, because it can write standard machine learning pipelines, design APIs with documentation, and interact with documented APIs. However, the process of doing that is very slow and expensive, and resembles “build a tool and then use the tool” rather than “augment its own native intelligence”.
Which makes sense. The story of advances in human capabilities doesn’t look like “find clever ways to configure unprocessed rocks and branches from the environment in ways which accomplish our goals”, it looks like “build a bunch of tools, figure out which ones are most useful and how they are best used, use our best tools to build better tools, and so on, and then use the much-improved tools to do the things we want”.
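To make the “build a tool and then use the tool” pattern concrete, here is a minimal sketch (my illustration, assuming a simple repeatable phenomenon) of the kind of standard ML pipeline described above: gather observations, fit a model to them, then query the fitted artifact rather than predicting natively. `observe_phenomenon` is a hypothetical stand-in for whatever observable process is being studied.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def observe_phenomenon(x: np.ndarray) -> np.ndarray:
    """Hypothetical repeatable process with simple hidden structure."""
    return 3.0 * x + 0.5 + np.random.normal(scale=0.1, size=x.shape)

# Step 1: build the tool -- gather observations and fit a model to them.
inputs = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
outputs = observe_phenomenon(inputs)
tool = LinearRegression().fit(inputs, outputs)

# Step 2: use the tool -- from here on, the fitted model (not the builder's
# "native" judgment) supplies the predictions.
print(tool.predict(np.array([[42.0]])))
```

The point of the sketch is the division of labor: the slow, expensive part is constructing and validating the tool, after which prediction is cheap, which matches the “build a tool and then use the tool” framing rather than “augment its own native intelligence”.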