If you have a theory X which predicts Y, then there are important aspects of X other than Y: X has non-predictive content. When you say those other factors have to do with prediction in some way, that doesn’t mean only the predictive content of X matters, since the way they bear on prediction isn’t a prediction that X itself made.
All knowledge is the same thing (“knowledge”) because it has shared attributes.
I would say: evaluate X solely on the predictive merits of Y. If we are interested in future research directions, then make separate predictions about those.
A computer program doesn’t really count as knowledge. It’s information, in the scientific and mathematical sense, and you can write it down, but the similarity ends there. It is a tool that is built to do a job, and in that respect is more like a building. It doesn’t really count as knowledge at all, not to a Bayesian at any rate.
Remember, narrowness is a virtue.
What? Broad reach is a virtue. A theory which applies to many questions—which has some kind of general principle to it—is valuable. Like QM which applies to the entire universe—it is a universal theory, not a narrow theory.
It has apparent design. It has adaptation to a purpose. It’s problem-solving information. (The knowledge is put there by the programmer, but it’s still there.)
One of the ways this came up is we were considering theories with identical Y, and whether they have any differences that matter. I said they do. Make sense now?
What happens if we taboo the word ‘theory’?
In this sense, a building is also knowledge. Programming is making, not discovering.
Suppose two theories A and B make identical predictions for the results of all lab experiments carried out thus far but disagree about directions for future research. I would say they make different predictions about which research directions will lead to success, and are therefore not entirely identical.
Just got an idea for a good example from another thread.
Consider chess. If a human and a chess program come up with the same move, do you think the differences between them, and their ways of thinking about the move, don’t really matter?
And suppose we want to learn from them. So we give them both white. We play the same moves against each of them. We end up with identical games, suppose. So, in the particular positions from that game they make identical predictions about what move has the best chance to win.
Now, in each case we also gather some information about why they made each move, so we can learn from it.
The computer program provides the move trees it searched at every move, with evaluations of the positions they reach.
The human provides explanations. He says things like, “I was worried my queenside wasn’t safe, so I decided I’d better win on the kingside quickly” or “I saw that this was eventually heading towards a closed game with my pawns fixed on dark squares, so that’s why I traded my bishop for a knight there”.
When you want to learn chess, these different kinds of information are both useful, but in different ways. They are different. The differences matter. For a specific person, with specific strengths and weaknesses, one or the other may be far far more useful.
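To make the program’s half of that concrete, here is a minimal sketch of the kind of record it could hand us: the move tree it searched, with an evaluation at each leaf. This is a toy illustration only; it assumes the python-chess library, and it uses a bare material count where a real engine’s search and evaluation are far more elaborate.

```python
from itertools import islice

import chess  # the python-chess library (an assumed dependency)

# Toy material values; a real engine weighs far more than material.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    """Crude material balance from White's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def search_tree(board: chess.Board, depth: int, line=()):
    """Walk the move tree to `depth`, yielding (move sequence, leaf evaluation)."""
    if depth == 0:
        yield line, evaluate(board)
        return
    for move in list(board.legal_moves):  # snapshot, since we mutate the board
        board.push(move)
        yield from search_tree(board, depth - 1, line + (move.uci(),))
        board.pop()

# Print a few leaves of the depth-2 tree from the starting position.
for moves, score in islice(search_tree(chess.Board(), 2), 5):
    print(" -> ".join(moves), "| eval:", score)
```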
So, the computer program and the human do different things, and thereby produce different results. Your point?
I was claiming that if they did the same thing they would get the same results.
The point is that identical predictive content (in this case, in the sense of predicting what move has the best chance to win the game in each position) does not mean that what’s going on behind the scenes is even similar.
It can even be that one thing is intelligent and the other isn’t. That’s how big the differences can be.
So, are you finally ready to concede that your claim is false? The one you were so sure of: that identical predictive content of theories means nothing else matters, and that their internal structure can’t be important?
No, because they do different things. If they take different actions, this implies they must have different predictions (admittedly it’s a bit anthropomorphic to talk about a chess program having predictions at all).
Incidentally, they are using different predictions to make their moves. For example, the human may predict P(my left side is too weak) = 0.9 and use this prediction to derive P(I should move my queen to the left side) = 0.8, while the chess program doesn’t really predict at all; but if it did, you would see something more like individual predictions for the chance of winning given each possible move, and a derived prediction like P(I should move my queen to the left side) = 0.8.
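To spell out one way that derivation could go, by the law of total probability (the conditional numbers here are made up purely for illustration):

```python
# Made-up numbers: deriving P(move queen to the left side) = 0.8 from
# P(left side too weak) = 0.9 by the law of total probability.
p_weak = 0.9              # P(my left side is too weak)
p_move_if_weak = 0.85     # P(move queen left | too weak)     -- assumed
p_move_if_fine = 0.35     # P(move queen left | not too weak) -- assumed

p_move = p_move_if_weak * p_weak + p_move_if_fine * (1 - p_weak)
print(round(p_move, 2))   # 0.8
```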
With such different processes, it’s really an astonishing coincidence that they make the same moves at all.
(I apologise in advance for my lack of knowledge of how chess players actually think; I haven’t played it since I discovered go. I hope my point is still apparent.)
That’s not how chess players think.
Your point is apparent—you try to reinterpret all human thinking in terms of probability—but it just isn’t true. There are lots of books on how to think about chess. They do not advise what you suggest. Many people follow the advice they do give, which is different and unlike what computers do.
People learn explanations like “control the center because it gives your pieces more mobility” and “usually develop knights before bishops because it’s easier to figure out the correct square for them”.
Chess programs do things like count up how many squares each piece on the board can move to. When humans play they don’t count that. They will instead do stuff like think about what squares they consider important and worry about those.
Notice how that sentence about controlling the center is actually a prediction in disguise.
As is the one about counting squares.
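For concreteness, that square-counting idea could be rendered in code something like this. It is a toy sketch: it assumes the python-chess library, and real programs combine many such terms in their evaluations.

```python
import chess  # python-chess again (an assumed dependency)

def mobility(board: chess.Board, color: chess.Color) -> int:
    """Count the legal moves `color` would have if it were their turn.

    (A sketch: it ignores edge cases such as the other side being in check.)
    """
    hypothetical = board.copy()
    hypothetical.turn = color
    return hypothetical.legal_moves.count()

def mobility_score(board: chess.Board) -> int:
    """Crude evaluation term: White's mobility minus Black's."""
    return mobility(board, chess.WHITE) - mobility(board, chess.BLACK)

print(mobility_score(chess.Board()))  # 0: each side has 20 moves at the start
```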
I only write “idea” instead. If you taboo that too, I start writing “conjecture” or “guess”, which is misleading in some contexts. Taboo those too and I might have to say “thought” or “belief” or “misconception”, which are even more misleading in many contexts.
Yes, buildings rely on, and physically embody, engineering knowledge.
But they don’t make those predictions. They don’t say this stuff; they embody it in their structure. It’s possible for a theory to be more suited to something, but no one knows that, and it wasn’t made that way on purpose.
The point of tabooing words is to expand your definitions and remove misunderstandings, not to pick near-synonyms.
You didn’t read the article, and so you are missing the point. In spectacular fashion, I might add.
So, buildings should be made out of bricks, therefore scientific theories should be made out of bricks?
I contend that a theory can make more predictions than are explicitly written down. Most theories make infinitely many predictions. A logically omniscient Ideal Bayesian would immediately be able to see all those predictions just from looking at the theory; a Human Bayesian may not, but they still exist.
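To illustrate with a toy example of my own (nothing standard, and the setup is just for show): treat a theory as a function over an unbounded space of experimental setups. Newtonian gravity, say, yields a predicted force for every pair of masses at every separation, so an enumeration of its predictions never terminates.

```python
from itertools import count, islice

# Toy illustration: a theory as a function from experimental setups to
# predicted outcomes. The input space is unbounded, so the theory makes
# infinitely many predictions even though none are written down anywhere.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def predicted_force(m1_kg: float, m2_kg: float, r_m: float) -> float:
    """Newtonian gravity's prediction for one concrete setup."""
    return G * m1_kg * m2_kg / r_m ** 2

def predictions():
    """Enumerate predictions for 1 kg masses at every whole-metre separation."""
    for r in count(1):  # never terminates
        yield (1.0, 1.0, float(r)), predicted_force(1.0, 1.0, float(r))

# A bounded reasoner only ever inspects finitely many of them:
for setup, force in islice(predictions(), 3):
    print(setup, "->", force, "N")
```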
What do you think is more likely:
1) I meant
2) I meant something else which you didn’t understand?
Can you specify the infinitely many predictions of the theory “Mary had a little lamb” without missing any that I deem important, such as structural issues? Saying the theory “Mary had a little lamb” is not just a prediction but infinitely many predictions is non-standard terminology, right? Did you invent this terminology during this argument, or did you always use it? Are there articles on it?
Bayesians don’t treat the concept of a theory as being fundamental to epistemology (which is why I wanted to taboo it), so I tried to figure out the closest Bayesian analogue to what you were saying and used that.
As for 1) and 2), I was merely pointing out that “programs are a type of knowledge, programs should be modular, therefore knowledge should be modular” and “buildings are a type of knowledge, buildings should be made of bricks, therefore knowledge should be made of bricks” are of the same form and equally valid. Since the latter is clearly wrong, I was making the point that the former is also wrong.
To be honest, I have never seen a better demonstration of the importance of narrowness than your last few comments; they are exactly the kind of rubbish you end up talking when you make a concept too broad.
I didn’t make that argument. Try to be more careful not to put words into my mouth.
When you have a reputation like curi’s this is exactly the sort of rhetorical question you should avoid asking.
What should I do, do you think? I take it you know what my goals are in order to judge this issue. Neat. What are they? Also what’s my reputation like?