Because I think the word “know”, as used by a human understanding a model, is standing in for a particular kind of mirror-modeling, in which we possess a reproductive model of a thing in our mind which we can use to simulate a behavior, whereas the word “know”, as used by the referent AI, is standing in for “the set of information used to inform the development of a process”.
So an AI which has been trained on a game which it lost can behave “as if it has knowledge of that game”, when in fact the only remnant of that game may be a slightly adjusted parameter, perhaps a connection weighting somewhere is 1% different than it would otherwise be.
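(As a concrete sketch of that remnant, with entirely made-up numbers rather than any real training setup: a single game's worth of gradient descent leaves nothing behind but a small nudge to some weights.)

```python
# Minimal sketch with made-up numbers (not any real training setup):
# the only remnant a single lost game leaves is a small nudge to some weights.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))            # stand-in for one layer of the network
snapshot = weights.copy()                    # state before training on the lost game

gradient = rng.normal(size=(4, 4)) * 0.01    # hypothetical gradient from that one game
learning_rate = 0.1
weights -= learning_rate * gradient          # the update: all that remains of the game

print(f"largest single-weight change: {np.abs(weights - snapshot).max():.4f}")
# Nothing here records the moves, the opponent, or the loss itself.
```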
To “know” what the AI knows, in the sense that it knows it, requires a complete reproduction of the AI’s state. That is, if you know everything the AI actually knows, as opposed to the information-state that informed its development, then all you actually know is that this particular connection is weighted 1% differently. To meaningfully apply that knowledge, you must simulate the AI (you must know how all the connections interact in a holistic sense), in which case you don’t know anything; you’re just asking the AI what it would do, which is not meaningfully knowing what it knows in any useful sense.
Which is basically because it doesn’t actually know anything. Its state is an algorithm, a process; this algorithm could perhaps be dissected, broken down, simplified, and turned into knowledge of how it operates—but this is just another way of simulating and querying a part of the AI; critically, knowing how the AI operates is having knowledge that the AI itself does not actually have.
Because now we are mirror-modeling the AI, and turning what the AI is, which isn’t knowledge, into something else, which is.
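(A toy illustration of the “you must simulate the AI” point, using a two-layer stand-in that is nothing like KataGo’s real architecture: inspecting one weight gives you a number, and the only way to turn that number into behavior is to run the whole network, i.e. ask it what it would do.)

```python
# Toy two-layer stand-in, not any real Go engine. Knowing the value of w2[3, 1]
# tells you nothing usable on its own; applying it means running everything.
import numpy as np

rng = np.random.default_rng(1)
w1 = rng.normal(size=(9, 16))
w2 = rng.normal(size=(16, 9))

def forward(board: np.ndarray) -> np.ndarray:
    """Querying the 'AI': a full forward pass in which every weight participates."""
    hidden = np.maximum(0, board @ w1)       # holistic interaction of all connections
    return hidden @ w2                       # move preferences, not stored propositions

single_weight = w2[3, 1]                     # "what the AI knows": just a number
preferences = forward(rng.normal(size=9))    # "applying it": asking the AI what it would do
```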
I guess it seems to me that you’re claiming that the referent AI isn’t doing any mirror-modeling, but I don’t know why you’d strongly believe this. It seems false for algorithms that use Monte Carlo Tree Search, as KataGo does (although another thread indicates that smart people disagree with me about this), but even for pure neural network models, I’m not sure why one would be confident that it’s false.
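(For concreteness, this is roughly what I have in mind when I say tree search looks like mirror-modeling: the program copies the game state and explicitly plays it forward before committing to a move. The sketch below is a bare flat-rollout search with hypothetical callback names, not KataGo’s actual PUCT/MCTS implementation.)

```python
# Bare flat-rollout search, sketched with hypothetical callbacks; not KataGo's
# actual MCTS. The point is only that the future is explicitly simulated.
import random

def rollout_value(state, apply_move, legal_moves, is_terminal, score, depth=20):
    """Play random moves forward from a position and report how it scores."""
    for _ in range(depth):
        if is_terminal(state):
            break
        # apply_move is assumed to return a new state rather than mutate in place
        state = apply_move(state, random.choice(legal_moves(state)))
    return score(state)

def choose_move(state, apply_move, legal_moves, is_terminal, score, n_sims=100):
    """Estimate each candidate move by simulating many possible futures."""
    def estimate(move):
        futures = [
            rollout_value(apply_move(state, move), apply_move,
                          legal_moves, is_terminal, score)
            for _ in range(n_sims)
        ]
        return sum(futures) / n_sims
    return max(legal_moves(state), key=estimate)
```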
Because it’s expensive, slow, and orthogonal to the purpose the AI is actually trying to accomplish.
As a programmer, I take my complicated mirror models, try to figure out how to transform them into sets of numbers, try to figure out how to use one set of those numbers to create another set of those numbers. The mirror modeling is a cognitive step I have to take before I ever start programming an algorithm; it’s helpful for creating algorithms, but useless for actually running them.
Programming languages are judged helpful in part by how well they pretend to be a mirror model, and efficient by how completely they ignore the mirror model when it comes time to compile and run. There is no program which is made more efficient by representing data internally as the objects the programmers created; compilers find efficiency gains by reducing away the unnecessary complexity programmers create for themselves so they can more easily map their messy intuitions onto cold logic.
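(A small illustration of the contrast, hypothetical rather than taken from any particular engine: the object model I think in versus the flat array of numbers the program actually runs on.)

```python
# Hypothetical contrast, not any particular engine's code: the "mirror model"
# a programmer thinks in versus the flat numbers the machine actually runs on.
from dataclasses import dataclass
import numpy as np

@dataclass
class Stone:          # how I picture the domain: named things with meaning
    color: str        # "black" or "white"
    row: int
    col: int

def flatten(stones: list[Stone], size: int = 19) -> np.ndarray:
    """What remains once the mirror model is stripped away: +1, -1, 0 in an array."""
    board = np.zeros((size, size), dtype=np.int8)
    for stone in stones:
        board[stone.row, stone.col] = 1 if stone.color == "black" else -1
    return board
```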
Why would an AI introduce this step in the middle of its processing?
...but this is just another way of … querying a part of the AI...
I’ve studied Go using AI and have heard others discuss the use of AI in studying Go. Even for professional Go players, the AI’s inability to explain why it gave a higher win rate to a particular move or sequence is a problem.
Even if you could program a tertiary AI which could query the Go-playing AI, analyze the calculations the Go-playing AI uses to make its judgements, and then translate that into English (or another language) so that this tertiary AI could explain why the Go-playing AI made a move, I would still disagree that even this hybrid system ‘knew’ how to play Go.
There is a definite difference between ‘calculating’ and ‘reasoning’, such that even a trained neural network is, I think, really still just one big calculator, not a reasoner.