Interesting sequence so far!

Could we try something like an ‘agent-relative’ definition of knowledge accumulation?
e.g. Knowledge about X (e.g. the shape of the coastline) is accumulating in region R (e.g. the parchment) accessibly for an agent A (e.g. a human navigator) to the extent that agent A is able to condition its behaviour on X by observing R and not X directly. (This is borrowing from the Cartesian Frames definition of an ‘observable’ being something the agent can condition on).
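To make that core definition a bit more concrete, here’s a minimal toy sketch in Python (entirely my own illustration, with made-up functions like `record` and `navigator`): the navigator’s behaviour ends up tracking the coastline X, but only via the parchment R, since the navigator never looks at X itself.

```python
def record(coastline):
    """R: the parchment's content, a record of the feature X (the coastline)."""
    return f"the coast here is {coastline}"

def navigator(parchment):
    """A: conditions its steering only on R (the parchment), never on X directly."""
    return "hug the shore" if "jagged" in parchment else "sail straight out"

# Varying X changes A's behaviour, but only via R: the navigator never sees X itself,
# so to the extent the steering tracks the coastline, the knowledge lives in R.
for coastline in ["jagged", "smooth"]:
    print(coastline, "->", navigator(record(coastline)))
```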
If we want to break this down to lower level concepts than ‘agents’ and ‘conditioning behaviour’ and ‘observing’, we could say something roughly like (though this is much more unwieldy):
X is some feature of the system (e.g. shape of coastline).
R is some region of the system (e.g. the parchment).
A is some entity in the system which can ‘behave’ in different ways over time (e.g. the helmsman, who can turn the ship’s wheel over time; ‘over time’ in the sense that they don’t just have the single option to ‘turn right’ or ‘turn left’ once, but rather the option to ‘turn right for thirty minutes, then turn left for twenty minutes, then...’ or some other trajectory).
Definition for ‘conditioning on’: We say A is ‘conditioning on’ R if changing R causes a change in A’s behaviour (i.e. if we perturb R (e.g. change the map) then A’s behaviour changes (e.g. the steering changes)). So just a Pearlian notion of causality, I think.
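A minimal sketch of this test (my own toy example, not from the sequence): intervene on R while holding everything else fixed, and check whether A’s behaviour changes.

```python
def helmsman(map_waypoints):
    """A's behaviour: a steering trajectory (a sequence of turns) read off the map R."""
    return [("right" if wp > 0 else "left", abs(wp)) for wp in map_waypoints]

def conditions_on(agent, region, intervene):
    """Pearlian-flavoured test: does intervening on the region change the behaviour?"""
    return agent(region) != agent(intervene(region))

original_map = [3, -2, 5]                   # R: the parchment's content
redraw_map = lambda m: [-w for w in m]      # do(R := a different map)

print(conditions_on(helmsman, original_map, redraw_map))  # True: A conditions on R
```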
An intermediate concept: We say A is ‘utilising the knowledge in R about X’ if:
1. A is conditioning on R (e.g. the helmsman is conditioning their steering on the content of the parchment), and
2. There exists some basin of attraction B which goes to some target set T (e.g. B is some wide range of ways the world can be, and T is ‘the ship ends up at this village by this time’), and if A were not conditioning on R then B would be smaller (if the helmsman were not steering according to the map then they would only end up at the village on time in far fewer worlds), and
3. If A were to also condition on X, this would not expand B much (e.g. seeing the shape of the coastline once you can already read the map doesn’t help you much), but
4. If A were not conditioning on R, then conditioning on X would expand B a lot more (e.g. if you couldn’t steer by the map, then seeing the shape of the coastline would help you a lot).
(You could also put all this in terms of utility functions instead of target sets I reckon, but the target set approach seemed easier for this sketch.)
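Here’s a toy numerical sketch of conditions 1–4 (everything here, the one-dimensional ‘world’, the policies, the thresholds, is invented for illustration): we estimate the ‘size’ of the basin B as the fraction of sampled starting worlds from which the ship reaches the target T, under each conditioning regime.

```python
import random

random.seed(0)

VILLAGE = 10.0      # T: "the ship ends up at this position"
TOLERANCE = 1.0     # within this distance counts as arriving
N_WORLDS = 10_000   # sampled "ways the world can be"

def coastline(world):
    """X: a feature of the world A could in principle observe (here, the start position)."""
    return world

def make_map(world):
    """R: a (slightly noisy) record of X written on the parchment."""
    return coastline(world) + random.gauss(0.0, 0.1)

def steer(world, observation):
    """A's behaviour: correct course using whatever it observed, or drift blindly."""
    if observation is None:
        return world + 5.0                  # fixed heading, no conditioning
    return world + (VILLAGE - observation)  # correct using the observed estimate

def basin_size(observe):
    """Fraction of sampled worlds that end up inside the target set T."""
    hits = 0
    for _ in range(N_WORLDS):
        world = random.uniform(-20.0, 20.0)
        hits += abs(steer(world, observe(world)) - VILLAGE) < TOLERANCE
    return hits / N_WORLDS

size_neither = basin_size(lambda w: None)                            # condition on nothing
size_R       = basin_size(lambda w: make_map(w))                     # condition on R only
size_X       = basin_size(lambda w: coastline(w))                    # condition on X only
size_both    = basin_size(lambda w: 0.5 * (make_map(w) + coastline(w)))

print(f"neither: {size_neither:.2f}  R: {size_R:.2f}  X: {size_X:.2f}  both: {size_both:.2f}")
print("2. conditioning on R enlarges B:", size_R > size_neither)
print("3. adding X on top of R barely helps:", size_both - size_R < 0.05)
print("4. X alone would have helped a lot:", size_X - size_neither > 0.5)
```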
So we’ve defined what it means for A to ‘utilise the knowledge in R about X’, but what we really want is to say what it means for A to be able to utilise the knowledge in R about X, because when A is able to utilise the knowledge in R about X, we can say that R contains knowledge about X accessibly for A. (e.g. if the map is not on the ship, the helmsman will not be utilising its knowledge, but in some sense they ‘could’, and thus we would still say the map contains the knowledge)
But now I find that it’s far past my bedtime and I’m too sleepy to work out this final step haha! Maybe it’s something like: R contains knowledge about X accessibly to A ‘if we can, without much change to R or A, cause A to utilise the knowledge in R about X’ (e.g. just by moving the map onto the ship, and not changing anything else, we can cause the helmsman to utilise the knowledge in the map). Though a clear problem here is: what if A is not ‘trying’ to achieve a goal that requires the knowledge on the map? (e.g. if the helmsman were on the other side of the world trying to navigate somewhere else, then they wouldn’t utilise the knowledge in this map because it wouldn’t be relevant). In this case it seems we can’t cause A to utilise the knowledge in R about X ‘without much change to R or A’: we would need to change A’s goal to make it utilise the knowledge in R. Hmm.....
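To illustrate the proposed test and the problem with it, a tiny sketch (boolean stand-ins, all invented): ‘accessible’ asks whether A already utilises R, or whether some cheap intervention (one that barely changes R or A) would make it do so; when the map is only relevant to a goal A doesn’t have, no cheap intervention works.

```python
def utilises(agent_sees_map, map_relevant_to_goal):
    """Boolean stand-in for the 'utilising the knowledge in R about X' test above."""
    return agent_sees_map and map_relevant_to_goal

def accessible(agent_sees_map, map_relevant_to_goal, cheap_interventions):
    """Proposed test: A already utilises R, or some cheap intervention
    (one that barely changes R or A) would cause it to."""
    states = [(agent_sees_map, map_relevant_to_goal)]
    states += [fix(agent_sees_map, map_relevant_to_goal) for fix in cheap_interventions]
    return any(utilises(sees, relevant) for sees, relevant in states)

# Cheap: moving the map onto the ship changes neither R's content nor A's goals.
move_map_onto_ship = lambda sees, relevant: (True, relevant)

print(accessible(False, True, [move_map_onto_ship]))   # True: accessible via a cheap fix
print(accessible(False, False, [move_map_onto_ship]))  # False: we'd have to change A's goal
```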
One thing I like about this approach is that when R does have information about X but it’s not in a very ‘action-ready’ or ‘easily usable’ form (e.g. if R is a disk of 10,000 hours of video taken by ships, which you could use to eventually work out the shape of the coastline), then I think this approach would say that R does contain knowledge about X (accessibly to A) to some degree, but less so than if it just directly gave the shape of the coastline. What makes this approach say this? Because in the “10,000 hours of footage” case, the agent is less able to condition its behaviour on X by observing R (which is the ‘definition’ of knowledge under this approach): A has to first do all the work of watching through the footage and extracting/calculating the relevant knowledge before it can use it, so in all the time it is doing this processing it cannot yet condition its behaviour on X by observing R, and so overall, over time, its behaviour is ‘less conditioned’ on X via R.
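One way to put a rough number on that (a sketch with invented quantities): if A needs some number of steps to extract the coastline from R before it can act on it, then over a fixed voyage its behaviour can only be conditioned on R for the remaining steps, so the time-averaged degree of conditioning is lower for raw footage than for a ready-made map.

```python
def average_conditioning(horizon, processing_steps):
    """Fraction of the voyage during which A's steering can actually depend on R:
    before `processing_steps` have elapsed, A is still extracting the relevant
    knowledge from R and cannot yet condition its behaviour on it."""
    return max(0, horizon - processing_steps) / horizon

HORIZON = 100  # length of the voyage, in steps
print(average_conditioning(HORIZON, processing_steps=0))   # 1.0: a ready-made map
print(average_conditioning(HORIZON, processing_steps=80))  # 0.2: raw footage needing much processing
```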
Anyway curious to hear your thoughts about this approach, I might get to finish filling it out another time!