Not sure I see how that can solve the issue of “the truth missing from the hypothesis space”.
Solomonoff Induction contains every possible (computable) hypothesis; so long as you’re in a computable universe (and have logical omniscience), the truth is in your hypothesis space.
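To make the weighting scheme concrete, here's a toy stand-in of my own construction (a hand-picked rule set stands in for "all programs", which is nothing like the real thing but shows the shape): each "program" gets prior weight 2^(-length), programs inconsistent with the observations get zeroed out, and the prediction comes from the heaviest survivor.

```python
# Toy stand-in for Solomonoff induction: a few hand-picked rules for
# predicting the next integer in a sequence, weighted by 2^(-length)
# the way the universal prior weights programs by 2^(-program length).
# (Hypothetical rule set, for illustration only.)
RULES = {  # name -> (description length, next-element function)
    "same":   (1, lambda seq: seq[-1]),
    "plus1":  (1, lambda seq: seq[-1] + 1),
    "plus2":  (2, lambda seq: seq[-1] + 2),
    "double": (2, lambda seq: seq[-1] * 2),
    "fib":    (3, lambda seq: seq[-1] + seq[-2]),
}

def predict(seq):
    """Zero out rules inconsistent with the observed prefix, weight the
    survivors by 2^(-length), and return the highest-weight prediction."""
    weights = {}
    for name, (length, rule) in RULES.items():
        consistent = all(rule(seq[:i]) == seq[i] for i in range(2, len(seq)))
        weights[name] = 2.0 ** -length if consistent else 0.0
    best = max(weights, key=weights.get)
    if weights[best] == 0.0:
        return None, None  # the truth is missing from this hypothesis space
    return RULES[best][1](seq), best

print(predict([1, 2, 3, 4]))     # (5, 'plus1')
print(predict([1, 1, 2, 3, 5]))  # (8, 'fib')
print(predict([1, 4, 9, 16]))    # (None, None) -- squares aren't in RULES
```

The real construction enumerates every computable rule instead of five of them, which is exactly why the truth is guaranteed to be in there.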
But this is sort of the trivial solution: while it's guaranteed to have the right answer, it had to bring in a truly staggering number of wrong answers to get it. It looks like what people actually do is notice when their models are being surprisingly bad, and then explicitly generate alternative models to expand their hypothesis space.
(You can actually do this in a principled statistical way; you can track, for example, whether or not you would have converged to the right answer by now if the true answer were in your hypothesis space, and call for a halt when it becomes sufficiently unlikely.)
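Here's a minimal sketch of what such a check could look like, under assumptions of my own (a finite space of coin biases and a crude likelihood-ratio cutoff, not any specific named test): if even the best hypothesis in the space fits the data far worse than the empirical frequency does, the truth is probably missing.

```python
import math, random

# Sketch of the "call for a halt" idea, with assumptions I'm adding
# myself: the hypothesis space is a finite set of coin biases, and the
# check asks "could any hypothesis I have plausibly explain this data?"
HYPOTHESES = [0.1, 0.3, 0.5, 0.7]  # candidate P(heads); truth may be absent

def loglik(h, n, k):
    """Log-likelihood of k heads in n flips under bias h."""
    if h in (0.0, 1.0):
        return 0.0 if k in (0, n) else -math.inf
    return k * math.log(h) + (n - k) * math.log(1 - h)

def space_looks_too_small(flips, threshold=1e-3):
    """True when even the best in-space hypothesis is `threshold` times
    less likely than the empirical-frequency fit, i.e. the data are more
    surprising than they should be if the truth were in HYPOTHESES."""
    n, k = len(flips), sum(flips)
    best_in_space = max(loglik(h, n, k) for h in HYPOTHESES)
    best_possible = loglik(k / n, n, k)
    return math.exp(best_in_space - best_possible) < threshold

random.seed(0)
flips = []
for t in range(1, 201):
    flips.append(1 if random.random() < 0.9 else 0)  # true bias 0.9, absent
    if t >= 20 and space_looks_too_small(flips):
        print(f"after {t} flips: no hypothesis fits; expand the space")
        break
```

With the cutoff at 1e-3 this fires after roughly sixty flips, since the log-likelihood gap between the best in-space bias (0.7) and the true bias (0.9) grows at roughly the KL divergence per flip.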
Most of the immediate examples that jump to mind are mathematical, but those probably don't count as concrete. If you have a doctor trying to treat patients, they might suspect that if they actually had the right set of possible conditions, they could apply a short flowchart to determine the correct treatment, apply it, and resolve the issue. And so when building that flowchart (i.e., the hypothesis space of conditions the patient might have), they'll notice when too many patients aren't getting better, or when it's surprisingly difficult to classify patients.
If people with disease A cough and don't have headaches, and people with disease B have headaches and don't cough, then on observing a patient who both coughs and has a headache, the doctor might think "hmm, I probably need to make a new cluster" instead of "ah, someone with both A and B."
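A hypothetical sketch of that reflex in code (the symptom profiles and threshold are made up): instead of forcing every patient into an existing diagnosis, flag patients that no known profile fits.

```python
# Diseases as known symptom profiles, plus a mismatch threshold that
# triggers "make a new cluster" rather than forcing a diagnosis.
# (Entirely hypothetical profiles, for illustration only.)
PROFILES = {
    "A": {"cough": 1, "headache": 0},
    "B": {"cough": 0, "headache": 1},
}

def diagnose(patient, max_mismatch=0):
    """Return the best-matching disease, or None when every known
    profile disagrees with the patient on more than `max_mismatch`
    symptoms (a candidate for a new cluster)."""
    def mismatches(disease):
        return sum(patient[s] != v for s, v in PROFILES[disease].items())
    best = min(PROFILES, key=mismatches)
    return best if mismatches(best) <= max_mismatch else None

print(diagnose({"cough": 1, "headache": 0}))  # 'A'
print(diagnose({"cough": 1, "headache": 1}))  # None -> new cluster, not "A and B"
```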
I read the article, and I have to say the approach is fascinating in its scale and vision. And I can see how it might lead to interesting applications in computer science. But in its current state, as an algorithm for a human mind... I have to admit that I cannot justify investing the time even to attempt to apply it.
Thank you for all the info! :)