I would prefer to say that a textbook doesn’t make predictions. It may encode some information in a way that allows an agent to make a prediction. I’m open to that being semantic hair-splitting, but I think there is a useful distinction to be made between “making predictions” (as an action taken by an agent), and “having some representation of a prediction encoded in it” (a possibly static property that depends upon interpretation).
But then, that just pushes the distinction back a little: what is an agent? Per common usage, it is something that can “decide to act”. In this context we presumably also want to extend this to entities that can only “act” in the sense of accepting or rejecting beliefs (such as the favourite “brain in a jar”).
I think one distinguishing property we might ascribe even to the brain-in-a-jar is the likelihood that its decisions could affect the rest of the world in the gross material way we’re accustomed to thinking about. Even one neuron of input or output being “hooked up” could suffice in principle. It’s a lot harder to see how the internal states of a lump of rock could be “hooked up” in any corresponding manner without essentially subsuming it into something that we already think of as an agent.
Response broken up by paragraphs:

1)

If I write “The sun will explode in the year 5 billion AD” on a rock, the

> possibly static property that depends upon interpretation

is that it says “The sun will explode in the year 5 billion AD”, and the ‘dependency on interpretation’ is ‘the ability to read English’.
> a textbook doesn’t make predictions.

‘Technically true’, in that it may encode a record of past predictions by agents, in addition to

> encod[ing] some information in a way that allows an agent to make a prediction.
2)
Give the brain a voice, a body, or hook it up to sensors that detect what it thinks. The last option may not be what we think of as control, and yet, given further feedback (visual or otherwise), one (such as a brain, in theory) may learn to control things.
3)
> It’s a lot harder to see how the internal states of a lump of rock could be “hooked up” in any corresponding manner without essentially subsuming it into something that we already think of as an agent.
Break it up, extract those rare earth metals, make a computer. Is it an agent now?