I don’t think the post acknowledges the main value of developing comprehensible, composable pieces of theory? It’s natural that scientists want to see results constructed from insights rather than opaque black boxes that just say “It is true” or “it is false”, because they can use insights to improve the insight generation process.
You can use each new piece of theory to make a load of new theories, and to discipline and improve the theorist. You can’t use the outputs of a modeler to improve modelers.
This might be a good definition of… one of the thresholds we expect around AGI: Are you learning about learning, or are you just learning about a specific phenomenon external to yourself? An ML model isn’t learning about learning. It’s not looking at itself.
But human science is always looking at itself, because human intelligence has always been self-reflective. If you replace parts of science with ML, that self-reflective quality diminishes.
I don’t think this would cover the entirety of science; it would just cover the bits that currently require statistical tests. I agree this is not a way to automate science, but statistical models are in themselves no more explainable than a universal modeler is. They’re less so, since they introduce fake concepts that don’t map onto reality, and this paradigm doesn’t.