i am kind of worried by the possibility that this is not true: that there is an ‘ideal procedure for figuring out what is true’.
for that to be not true would mean: for any task (or some portion of tasks?), the only way to solve it is through something like a learning/training process (in the AI sense), or some other search process that involves checking candidate solutions. it would mean there’s no ‘reason’ behind the solution being what it is; it’s just a {mathematical/logical/algorithmic/other isomorphism} coincidence.
for it to be true would mean, i guess, that there’s some other procedure ({function/program}) that can deduce the solution in a more ‘principled’[1] way (whether more or less efficiently).
more practically, it being not true would be troubling for strategies based on ‘create the ideal intelligence-procedure and use it as an oracle [or place it inside a hardcoded structure that contains a formal specification of values and uses it like an oracle]’.
why do i think it’s possible for it to be not true? because we currently observe training processes succeeding, but don’t yet know of an ideal procedure[2]. that’s all. a mere possibility, not a ‘positive argument’.
[1] i don’t know exactly what i mean by this.
[2] in case anyone thinks ‘bayes theorem / solomonoff induction!’: bayes theorem isn’t it, because, for example, it alone doesn’t tell you how to solve a maze. i can try to elaborate if needed.
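(to make this concrete, a minimal sketch in python, with a made-up 3×3 grid maze of my own choosing: bayes theorem is a single belief-update rule, while actually finding a path requires a separate search procedure that proposes moves and checks them, here breadth-first search.)

```python
from collections import deque

# bayes theorem is just an update rule: given a prior, a likelihood, and
# the marginal probability of the evidence, it revises a belief. it does
# not, by itself, propose or search over candidate solutions.
def bayes_update(prior, likelihood, evidence_prob):
    # P(H|E) = P(E|H) * P(H) / P(E)
    return likelihood * prior / evidence_prob

# solving a maze, by contrast, needs an explicit search procedure that
# generates moves and checks them: here, breadth-first search on a small
# illustrative grid (0 = open, 1 = wall), start top-left, goal top-right.
MAZE = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]

def solve_maze(start=(0, 0), goal=(0, 2)):
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path  # BFS: the first complete path found is a shortest one
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < 3 and 0 <= nc < 3 and MAZE[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # no path exists

print(solve_maze())  # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
```

the point is just that the update rule and the search procedure are different kinds of objects; having the first doesn’t hand you the second.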
I think there’s no need to think of “training/learning” algorithms as absolutely distinct from “principled” algorithms. It’s just that the understanding of why deep learning works is a little weak, so we don’t know how to view it in a principled way.
It sounds like you’re saying, “deep learning itself is actually approximating some more ideal process.” (I have no comments on that, but I find it interesting to think about what that process would be, and what its safety-relevant properties would be)
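(A hedged toy illustration of what that could look like, with data, dimensions, and learning rate chosen arbitrarily for the example: plain gradient descent on a ridge-regularized linear model converges to the same weights as the closed-form MAP estimate of Bayesian linear regression. Nothing here is specific to deep learning; it’s just the simplest case where a “training” route and a “principled” route demonstrably coincide.)

```python
import numpy as np

# toy setup: 50 samples, 3 features, known ground-truth weights plus noise
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=50)

lam = 0.1  # ridge strength, i.e. precision of a gaussian prior on weights

# "principled" route: closed-form MAP / ridge solution
w_map = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# "training" route: plain gradient descent on the same regularized loss
w = np.zeros(3)
lr = 0.01
for _ in range(5000):
    grad = X.T @ (X @ w - y) + lam * w
    w -= lr * grad

print(np.allclose(w, w_map, atol=1e-4))  # True: the two routes agree
```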