Just got an idea for a good example from another thread.
Consider chess. If a human and a chess program come up with the same move, then the differences between them, and their ways of thinking about the move, don’t really matter, do you think?
And suppose we want to learn from them. So we give them both white. We play the same moves against each of them. We end up with identical games, suppose. So, in the particular positions from that game they make identical predictions about what move has the best chance to win.
Now, we also in each case gather some information about why they made each move to learn from.
For every move, the computer program provides the move trees it examined, with evaluations of the positions they reach.
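A minimal sketch of the kind of output being described: a toy negamax search over a hand-built two-ply tree, returning an evaluation and the line it prefers. The moves and values are made-up illustrations, not real chess analysis.

```python
# Toy sketch of an engine reporting the move trees it searched,
# with an evaluation at each leaf (illustrative values, not real chess).

def negamax(node, line=()):
    """Return (score, principal variation) from the side to move's view.
    A node is a leaf evaluation (number) or a dict of move -> subtree."""
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node, line
    best_score, best_line = float("-inf"), line
    for move, child in node.items():        # examine every candidate move
        score, pv = negamax(child, line + (move,))
        score = -score                      # the opponent's gain is our loss
        if score > best_score:
            best_score, best_line = score, pv
    return best_score, best_line

# A hand-built "move list tree": White's options, Black's replies, and an
# evaluation of each resulting position (from the side then to move).
tree = {"e4": {"e5": 0.3, "c5": 0.1},
        "d4": {"d5": 0.2, "Nf6": 0.5}}

score, pv = negamax(tree)
print(score, pv)  # 0.2 ('d4', 'd5')
```

A real engine searches far deeper and prunes heavily, but the shape of the information it can hand back is the same: lines of moves plus numeric evaluations.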
The human provides explanations. He says things like, “I was worried my queenside wasn’t safe, so I decided I’d better win on the kingside quickly” or “I saw that this was eventually heading toward a closed game with my pawns fixed on dark squares, so that’s why I traded my bishop for a knight there”.
When you want to learn chess, these different kinds of information are both useful, but in different ways. They are different. The differences matter. For a specific person, with specific strengths and weaknesses, one or the other may be far far more useful.
So, the computer program and the human do different things, and thereby produce different results. Your point?
I was claiming that if they did the same thing they would get the same results.
The point is that identical predictive content (in this case, in the sense of predicting what move has the best chance to win the game in each position) does not mean that what’s going on behind the scenes is even similar.
It would be that one thing is intelligent, and one isn’t. That’s how big the differences can be.
So, are you finally ready to concede that your claim is false? The one you were so sure of that identical predictive content of theories means nothing else matters, and that their internal structure can’t be important?
No, because they do different things. If they take different actions, this implies they must have different predictions (admittedly, it’s a bit anthropomorphic to talk about a chess program having predictions at all).
Incidentally, they are using different predictions to make their moves. For example, the human may predict P(my left side is too weak) = 0.9 and use this prediction to derive P(I should move my queen to the left side) = 0.8. The chess program doesn’t really predict at all, but if it did, you would see something more like individual predictions for the chance of winning given each possible move, and a derived prediction like P(I should move my queen to the left side) = 0.8.
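The contrast being drawn can be sketched in a few lines. The 0.9 and 0.8 figures are the post's own illustrative numbers; the threshold and the candidate moves are made-up assumptions, not claims about how real players or engines work.

```python
# "Human" route (as described above): derive a move judgement from a
# judgement about a weakness. Threshold 0.5 is an arbitrary assumption.
p_left_side_weak = 0.9
p_move_queen_left = 0.8 if p_left_side_weak > 0.5 else 0.2

# "Engine-like" route: a winning-chance estimate per candidate move,
# then pick the move whose estimate is highest.
win_chance = {"queen to left side": 0.8, "queen to right side": 0.55}
best_move = max(win_chance, key=win_chance.get)

print(p_move_queen_left, best_move)  # 0.8 queen to left side
```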
With such different processes, it’s really an astonishing coincidence that they make the same moves at all.
(I apologise in advance for my lack of knowledge of how chess players actually think; I haven’t played since I discovered go. I hope my point is still apparent.)
That’s not how chess players think.
Your point is apparent: you try to reinterpret all human thinking in terms of probability. But it just isn’t true. There are lots of books on how to think about chess, and they do not advise what you suggest. Many people follow the advice they do give, which is different from, and unlike, what computers do.
People learn explanations like “control the center because it gives your pieces more mobility” and “usually develop knights before bishops because it’s easier to figure out the correct square for them”.
Chess programs do things like count up how many squares each piece on the board can move to. When humans play they don’t count that. They will instead do stuff like think about what squares they consider important and worry about those.
Notice how this sentence is actually a prediction in disguise.
As is this.
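The square-counting mentioned above (mobility evaluation) can be sketched concretely. This hypothetical helper counts the squares a rook can reach given the occupied squares on the board; a real engine's evaluation combines many such terms and is far more involved.

```python
# Sketch of a mobility term: count the squares a rook can move to.
# `own` and `enemy` are sets of (file, rank) coordinates, 0-7 each.

def rook_mobility(square, own, enemy):
    """Count reachable squares for a rook, stopping at blockers;
    an enemy piece's square counts (as a capture), our own does not."""
    f, r = square
    count = 0
    for df, dr in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # the four rook rays
        nf, nr = f + df, r + dr
        while 0 <= nf < 8 and 0 <= nr < 8:
            if (nf, nr) in own:          # blocked by our own piece
                break
            count += 1                   # empty square, or a capture
            if (nf, nr) in enemy:        # a capture ends the ray
                break
            nf, nr = nf + df, nr + dr
    return count

# Rook on a1 of an empty board: 7 squares up the file + 7 along the rank.
print(rook_mobility((0, 0), set(), set()))  # 14
```

This is exactly the kind of bookkeeping a program does cheaply and exhaustively, and that a human player, thinking in terms of important squares, never does explicitly.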