On the spectrum from stochastic parrot to general reasoner, I’m at 70%. We’re definitely closer to a general reasoner than a parrot.
I don’t have a clear answer as to what I expect the outcome to be. I was a physics major, and I wish there were fewer discrete jumps in physics. Special relativity and general relativity seem like giant jumps in terms of their difficulty to derive, and there aren’t any intermediate theories. When this project comes out, we’ll probably be saying something like: the AI is 50% of the way between being able to derive this law and being able to derive a conceptually harder one.
With respect to something analogous to Newtonian mechanics, I think it heavily depends on what kind of information can be observed. If a model can directly observe the equivalents of forces and acceleration, then I believe most current models could derive it. If a model can only observe the equivalents of distances between objects and the corresponding times, and has to derive a second-order relationship from that, I suspect that only o3 could do it. In six months, I believe that all frontier models will be able to.
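To make the harder version concrete, here’s a minimal sketch (a hypothetical setup of my own, not something from the project) of what deriving a second-order relationship from raw observations looks like: given only noisy (time, distance) pairs from a uniformly accelerating object, a quadratic fit recovers the underlying law.

```python
import numpy as np

# Hypothetical setup: an object starts at rest under constant
# acceleration a = 3.0 (arbitrary units). The "model" only sees
# (time, distance) pairs -- never forces or accelerations.
rng = np.random.default_rng(0)
a_true = 3.0
t = np.linspace(0.0, 10.0, 50)
d = 0.5 * a_true * t**2 + rng.normal(0.0, 0.1, t.shape)

# Fitting a quadratic d(t) ~ c2*t^2 + c1*t + c0 recovers the
# second-order law, with c2 ~ a/2.
c2, c1, c0 = np.polyfit(t, d, deg=2)
print(f"inferred acceleration: {2 * c2:.3f} (true: {a_true})")
```

The hard part for a model isn’t the fit itself; it’s hypothesizing that a quadratic functional form is worth trying in the first place, rather than reading a linear relationship directly off the data.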
Given that Terence Tao described o1 as a mediocre graduate student, it probably won’t be long until frontier models are actually contributing to research, and that will be the most valuable feedback. I say all this with a lot of uncertainty, and if I’m wrong, this project will prove it. Likewise, there’s going to be a long period during which some people insist that AI can do legitimate automated R&D and others insist that it can’t. At that point, this will be a useful test for arguing one way or the other.
It would be interesting to vary the amount of information an AI is given until it can derive the whole set of equations. For example, see whether it can solve for one of Maxwell’s equations given the other three and the ability to perform experiments, or whether it can solve for the dynamic version of the equations given only the static ones and the ability to perform experiments.
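For reference (standard physics, not something specific to the project), here is the full set in vacuum, SI units; the static versions are what you get by dropping the two time-derivative terms:

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0,$$
$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.$$

So the static-to-dynamic task amounts to experimentally discovering Faraday induction and the displacement-current term.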
I hereby grant you 30 Bayes points for registering your beliefs and predictions!
“When this project comes out, we’ll probably be saying something like: the AI is 50% of the way between being able to derive this law and being able to derive a conceptually harder one.”
Can you clarify that a bit? When what project comes out? If you mean mine, I’m confused about why that would say something about the ability to derive special & general relativity.
“If a model can directly observe the equivalents of forces and acceleration...”
Agreed that each added step of mathematical complexity (in this case from linear to quadratic) will make it harder. I’m less convinced that acceleration being a second-order effect would make an additional difference, since that seems more like a conceptual framework we impose than like a direct property of the data. I’m uncertain about that, though, just speculating.
Thanks!
“Can you clarify that a bit? When what project comes out? If you mean mine, I’m confused about why that would say something about the ability to derive special & general relativity.”
I mean your project. I’m hoping it can allow us to be more precise by ranking models’ abilities to characterize well-known systems. For example, a model might be able to characterize Special Relativity given what Einstein knew at the time, but not General Relativity. If you were to walk along some hypothetical road from SR to GR, we might ballpark that a model is 30% of the way there. Maybe this project could generate domains that are roughly some x% of the way between SR and GR and validate our estimates.
“Agreed that each added step of mathematical complexity (in this case from linear to quadratic) will make it harder. I’m less convinced that acceleration being a second-order effect would make an additional difference, since that seems more like a conceptual framework we impose than like a direct property of the data.”
Right. The important point is that the equation it needs to find is quadratic instead of linear in the data.
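Spelling that out with the Newtonian analogue (my notation, just to illustrate): when force and acceleration are directly observable, the target law is linear in the observables; from positions and times alone, the law to recover is quadratic in the observable $t$:

$$F = ma \qquad \text{vs.} \qquad x(t) = x_0 + v_0 t + \frac{1}{2}\frac{F}{m}\,t^2.$$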
Got it, thanks. We’re planning to try to avoid testing systems that are isomorphic to real-world examples, in the interest of making a crisp distinction between reasoning and knowledge. That said, if we come up with a principled way to characterize system complexity (especially the complexity of the underlying mathematical laws), and if (big if!) that turns out to match what LLMs find harder, then we could certainly compare results to the complexity of real-world laws. I hadn’t considered that, thanks for the idea!