This post is a little too extreme.

> But a textbook on its own is not capable of making predictions.
If a textbook says ‘if you roll a die (see Figure 8) following this procedure (see Figure 9), it has a 1⁄6 chance of coming up 6 (and likewise 5, 4, etc.)’, that is a prediction.
> But then if...[then] is it really the case that any ordinary rock contains just as much knowledge as a chemistry textbook?
a) There’s readability (which is a property of an observer).
b) The premise seems unlikely. What can a calcium rock teach you about uranium?
> and that what happens in between follows no fundamental rhyme or reason, is entirely a matter of what works.
This may be right in the sense that ‘knowledge’ need not follow such a ‘rhyme or reason’.
> probability theory constrains mind design space in a way that is not merely a set of engineering tricks that “just work”.
But ‘engineering tricks that just work’ may end up operating in a similar fashion. Evolution might not be well described as ‘a process of trial and error’, but ‘engineering tricks that just work (sometimes)’ is a fair description of us.
> What we are seeking is a general understanding of the physical phenomenon of the collection and organization of evidence into a form that is conducive to planning.
People are notable for planning and acting. (Studying animals that act might be useful as well, especially because they are, or might be, less complex and easier to understand.)
> Ways of getting things done
seem to require a correspondence in order to succeed. However, learning complicates the idea of a static correspondence in much the same way a dynamic (changing) world does: past correspondence can slip as time goes forward. Someone can start doing something with the wrong idea, figure it out along the way, fix the plan, and still achieve what they were trying to do, despite starting out with the wrong idea, if they learn. (Though this definition of learning might be circular.)
One kind of knowledge that seems relevant is knowledge which is broadly applicable.
I would prefer to say that a textbook doesn’t make predictions. It may encode some information in a way that allows an agent to make a prediction. I’m open to that being semantic hair-splitting, but I think there is a useful distinction to be made between “making predictions” (as an action taken by an agent), and “having some representation of a prediction encoded in it” (a possibly static property that depends upon interpretation).
But then, that just pushes the distinction back a little: what is an agent? Per common usage, it is something that can “decide to act”. In this context we presumably also want to extend this to entities that can only “act” in the sense of accepting or rejecting beliefs (such as the favourite “brain in a jar”).
I think one distinguishing property we might ascribe even to the brain-in-a-jar is the likelihood that its decisions could affect the rest of the world in the gross material way we’re accustomed to thinking about. Even one neuron of input or output being “hooked up” could suffice in principle. It’s a lot harder to see how the internal states of a lump of rock could be “hooked up” in any corresponding manner without essentially subsuming it into something that we already think of as an agent.
Response broken up by paragraphs:

1)
If I write “The sun will explode in the year 5 billion AD” on a rock, the
> possibly static property that depends upon interpretation
is that it says “The sun will explode in the year 5 billion AD”, and the ‘dependency on interpretation’ is the ability to read English.
> a textbook doesn’t make predictions.
‘Technically true’ in that it may encode a record of past predictions by agents in addition to
> encod[ing] some information in a way that allows an agent to make a prediction.
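To make that distinction concrete, here is a minimal Python sketch (all names hypothetical; a toy illustration, not a claim about textbooks or rocks): the artifact is static data, and a prediction only appears once an interpreter with the right decoding ability acts on it.

```python
import re

# Toy sketch: the inscription is a static artifact; it does not
# predict anything by itself.
rock_inscription = "The sun will explode in the year 5 billion AD"

def english_reader(text):
    """Stand-in for an agent with 'the ability to read English':
    it turns the static inscription into a structured prediction."""
    match = re.search(r"(?P<subject>.+) will (?P<event>\w+) in the year (?P<when>.+)", text)
    if match is None:
        return None  # an observer who cannot decode the text gets nothing
    return match.groupdict()

# Only the pairing of artifact and reader yields a usable prediction.
print(english_reader(rock_inscription))
# -> {'subject': 'The sun', 'event': 'explode', 'when': '5 billion AD'}
```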
2)
Give the brain a voice, a body, or hook it up to sensors that detect what it thinks. The last option may not be what we usually think of as control, and yet, given feedback (visual or otherwise), one (such as a brain, in theory) may learn to control things.
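As a loose illustration of that last point (a toy sketch assuming the ‘brain’ gets nothing but an error signal and trial and error; not a model of actual neural control):

```python
import random

def world(signal):
    # Unknown (to the agent) effect of its one 'hooked up' output channel.
    return 3.0 * signal + 2.0

target = 10.0   # state the agent is trying to bring about
signal = 0.0    # the single output the agent can vary

for _ in range(200):
    candidate = signal + random.uniform(-1.0, 1.0)
    # Feedback: keep the change only if it moved the world closer to target.
    if abs(world(candidate) - target) < abs(world(signal) - target):
        signal = candidate

print(round(world(signal), 2))  # ~10.0: control learned from feedback alone
```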
3)
> It’s a lot harder to see how the internal states of a lump of rock could be “hooked up” in any corresponding manner without essentially subsuming it into something that we already think of as an agent.
Break it up, extract those rare earth metals, make a computer. Is it an agent now?