I was wondering if there might have been some insights people had worked out on the way to that—just any parts of such an algorithm that people have figured out, or that at least would reduce the error of a typical scientist.
There are some pretty general learning algorithms, and even ‘meta-learning’ algorithms in the form of tools that attempt to more or less automatically discover the best model (among some number of possibilities). Machine learning hyper-parameter optimization is an example in that direction.
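To make that concrete, here's a minimal sketch of hyper-parameter optimization as a kind of automatic model discovery, using scikit-learn's grid search. The dataset, model, and parameter grid are all illustrative choices, not anything the comment above specifies:

```python
# A grid search automatically picks the 'best' model configuration
# (here, the regularization strength of a ridge regression) among
# a fixed set of candidates, scored by cross-validation.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# Synthetic regression data, purely for illustration.
X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)

search = GridSearchCV(
    Ridge(),
    param_grid={"alpha": [0.01, 0.1, 1.0, 10.0, 100.0]},
    cv=5,  # 5-fold cross-validation as the model-selection criterion
)
search.fit(X, y)

print(search.best_params_)  # the discovered 'best' setting among the candidates
```

More elaborate tools (Bayesian optimization, AutoML) extend the same idea to searching over model families, not just their knobs.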
My outside view is that a lot of scientists should focus on running better experiments. According to a possibly apocryphal story told by Richard Feynman in a commencement address, one researcher worked out (at least some of) the controls needed to effectively study mice running mazes. Unfortunately, no one else bothered to employ those controls (let alone look for others)! Similarly, a lot of scientific studies or experiments are simply too small to produce reliable statistics. There's probably a lot of such low-hanging fruit available. Though note that this is often a 'bottom-up' contribution for 'modeling' a larger complex system.
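The "too small" point can be made precise with a power calculation. Here's a sketch using statsmodels; the effect size and thresholds are just the conventional illustrative values, not from anything above:

```python
# How many subjects per group does a two-sample t-test need to
# reliably (80% power) detect a 'medium' effect (Cohen's d = 0.5)
# at the conventional alpha = 0.05?
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # a 'medium' standardized difference between groups
    power=0.8,        # 80% chance of detecting the effect if it's real
    alpha=0.05,       # conventional false-positive rate
)
print(round(n_per_group))
```

The answer comes out to roughly 64 per group, which is already larger than many published studies; smaller or noisier effects push the requirement up fast.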
But as you demonstrate in your last two paragraphs, searching for a better 'ontology' for your models, e.g. deciding what else to measure, or what to measure instead, is a seemingly open-ended amount of work! There probably isn't a way to avoid having to think about it (beyond making other kinds of things that can think for us), at least not until you find an ontology that's 'good enough' anyway. Regardless, we're very far from being able to avoid even small amounts of this kind of work.