I guess it seems to me that you’re claiming that the referent AI isn’t doing any mirror-modelling, but I don’t know why you’d strongly believe this. It seems false about algorithms that use Monte Carlo Tree Search, as KataGo does (although another thread indicates that smart people disagree with me about this), but even for pure neural network models, I’m not sure why one would be confident that it’s false.
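(For concreteness, here is roughly what I mean: the tree search explicitly copies the current game state and plays candidate moves forward on that copy, which already looks like a small internal model of the game. A minimal, hypothetical sketch of the rollout step, with every interface name invented for illustration rather than taken from KataGo:)

```python
import copy
import random

def rollout(state, legal_moves, apply_move, is_terminal, score, max_depth=100):
    """One Monte Carlo rollout: simulate random play on a *copy* of the
    state, so the real game is never touched. The copied state that the
    search steps forward is the explicit internal model in question."""
    sim = copy.deepcopy(state)          # the search's own model of the board
    for _ in range(max_depth):
        if is_terminal(sim):
            break
        apply_move(sim, random.choice(legal_moves(sim)))
    return score(sim)                   # value estimate fed back up the tree
```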
Because it’s expensive, slow, and orthogonal to the purpose the AI is actually trying to accomplish.
As a programmer, I take my complicated mirror models, try to figure out how to transform them into sets of numbers, and then try to figure out how to use one set of those numbers to create another set of those numbers. The mirror modeling is a cognitive step I have to take before I ever start programming an algorithm; it’s helpful for creating algorithms, but useless for actually running them.
Programming languages are judged helpful partly by how well they pretend to be a mirror model, and efficient partly by how completely they ignore that mirror model when it comes time to compile and run. No program is made more efficient by representing data internally as the objects the programmers created; efficiency gains in compilers come from figuring out how to reduce away the unnecessary complexity that programmers created for themselves so they could more easily map their messy intuitions onto cold logic.
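To make that concrete, here is a minimal, hypothetical sketch (the names `Stone`, `Board`, and `encode` are mine, invented for illustration, not taken from any real engine): the object layer is the mirror model I think in, and the flat array of numbers is all the running program ever needs.

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class Stone:
    """The mirror model: how I picture the domain while designing the program."""
    colour: str   # "black" or "white"
    row: int
    col: int


class Board:
    """More mirror model: a board that 'contains' Stone objects."""
    def __init__(self, size: int = 19):
        self.size = size
        self.stones: list[Stone] = []

    def place(self, stone: Stone) -> None:
        self.stones.append(stone)


def encode(board: Board) -> np.ndarray:
    """Throw the objects away: +1 for black, -1 for white, 0 for empty.
    Everything the algorithm actually computes on is this array of numbers."""
    grid = np.zeros((board.size, board.size), dtype=np.int8)
    for s in board.stones:
        grid[s.row, s.col] = 1 if s.colour == "black" else -1
    return grid


board = Board()
board.place(Stone("black", 3, 3))
board.place(Stone("white", 15, 15))
print(encode(board))   # the numbers are the only thing the rest of the program sees
```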
Why would an AI introduce this step in the middle of its processing?