“No Free Lunch” (NFL) results in machine learning (ML) basically say that no learner beats any other when averaged over all possible problems, so success all comes down to having a prior that matches the problems we actually face.
So we know we need a sufficiently good prior in order to succeed.
But we don’t know what “sufficiently good” means.
E.g., I’ve heard speculation that we could use 2^-MDL (minimum description length) in any widely used Turing-complete programming language (e.g. Python) as our prior, and that this would encode enough information about our particular physics for something AIXI-like to become superintelligent, e.g. within our lifetimes.
Or maybe we can’t get anywhere without a much better prior.
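To make the 2^-MDL idea concrete, here is a minimal sketch of such a prior over hypotheses encoded as Python source strings. The helper names (`mdl_bits`, `prior_weight`) are my own illustration, and the true MDL is uncomputable, so raw source length in bits stands in for it:

```python
def mdl_bits(program_source: str) -> int:
    """Approximate a hypothesis's description length by the bit-length
    of its Python source. (Stand-in only: the true minimum description
    length is uncomputable.)"""
    return 8 * len(program_source.encode("utf-8"))

def prior_weight(program_source: str) -> float:
    """Assign prior mass 2^-MDL to a hypothesis, so shorter programs
    get exponentially more prior probability."""
    return 2.0 ** -mdl_bits(program_source)

# Two extensionally equivalent hypotheses; the shorter one dominates:
short = "lambda x: x"
long_ = "lambda x: x + 0  # needlessly longer equivalent"
assert prior_weight(short) > prior_weight(long_)
```

The point of the speculation is that the implicit constants of a human-designed language like Python might already smuggle in enough of our world's structure for this crude prior to suffice.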
Does anyone know of any work (or have intelligent thoughts) on this?
Although it’s not framed this way, I think much of the disagreement about timelines / the scaling hypothesis / deep learning in the ML community basically comes down to this question.