The point is that identical predictive content (in this case, in the sense of predicting what move has the best chance to win the game in each position) does not mean that what’s going on behind the scenes is even similar.
It would be that one thing is intelligent and the other isn't. That's how big the differences can be.
So, are you finally ready to concede that your claim is false? The one you were so sure of: that identical predictive content of theories means nothing else matters, and that their internal structure can't be important?
No, because they do different things. If they take different actions, this implies they must have different predictions (admittedly it's a bit anthropomorphic to talk about a chess program having predictions at all).
Incidentally, they are using different predictions to make their moves. For example, the human may predict P(my left side is too weak) = 0.9 and use this prediction to derive P(I should move my queen to the left side) = 0.8, while the chess program doesn't really predict at all; if it did, you would see something more like individual predictions for the chance of winning given each possible move, and a derived prediction like P(I should move my queen to the left side) = 0.8.
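To make the contrast concrete, here is a toy Python sketch of the two decision styles just described; every number, move name, and function in it is made up for illustration and is not taken from any real player or engine.

```python
# Toy illustration only: the probabilities, move names, and estimated_win_chance
# function are hypothetical, not a real engine or a model of real human thought.

# "Human-style": derive the probability of an action from a belief about the position.
p_left_side_too_weak = 0.9                      # P(my left side is too weak)
p_move_queen_left = 0.8 if p_left_side_too_weak > 0.5 else 0.2
# P(I should move my queen to the left side), derived from the belief above.

# "Engine-style": estimate a winning chance for every candidate move and pick the best.
def estimated_win_chance(move):
    # Placeholder numbers; a real engine would search the game tree and evaluate positions.
    return {"Qa4": 0.61, "Nf3": 0.55, "e4": 0.58}.get(move, 0.5)

candidate_moves = ["Qa4", "Nf3", "e4"]
best_move = max(candidate_moves, key=estimated_win_chance)
print(p_move_queen_left, best_move)  # 0.8 and 'Qa4' with these made-up numbers
```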
With such different processes, it's really an astonishing coincidence that they make the same moves at all.
(I apologise in advance for my lack of knowledge of how chess players actually think; I haven't played chess since I discovered go. I hope my point is still apparent.)
That’s not how chess players think.
Your point is apparent—you try to reinterpret all human thinking in terms of probability—but it just isn't true. There are lots of books on how to think about chess. They do not advise what you suggest. Many people follow the advice those books do give, which is different and unlike what computers do.
People learn explanations like “control the center because it gives your pieces more mobility” and “usually develop knights before bishops because it’s easier to figure out the correct square for them”.
Chess programs do things like count up how many squares each piece on the board can move to. When humans play, they don't count that; instead they do things like think about which squares they consider important and worry about those.
Notice how this sentence is actually a prediction in disguise.
As is this one.
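To illustrate what "count up how many squares each piece on the board can move to" might look like in code, here is a minimal Python sketch of such a mobility count; the position is invented, and real programs would compute this from an actual board representation and combine it with many other factors.

```python
# Minimal sketch of a mobility count ("how many squares each piece can move to"),
# using an invented toy position rather than a real board representation or engine API.

# Hypothetical position: piece -> squares it could currently move to.
white_moves = {"Qd1": ["d2", "d3", "e2", "f3", "g4", "h5"],
               "Nb1": ["a3", "c3", "d2"]}
black_moves = {"Qd8": ["d7", "d6"],
               "Nb8": ["a6", "c6", "d7"]}

def mobility(moves_by_piece):
    # Total number of squares all of one side's pieces can move to.
    return sum(len(squares) for squares in moves_by_piece.values())

# A program might use "my mobility minus my opponent's" as one term in its evaluation.
mobility_term = mobility(white_moves) - mobility(black_moves)
print(mobility_term)  # 9 - 5 = 4 in this made-up position
```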