Thanks for the reply. I take it that you’re interested not only in the idea of knowledge, but particularly in the idea of actionable knowledge.
Upon further reflection, I realize that all of the examples and partial definitions I gave in my earlier comment can in fact be summarized in a single, simple definition: a thing X has knowledge of a fact Y iff it contains some (sufficiently simple) representation of Y. (For example, a rock knows about the affairs of humans because it contains a representation of those affairs in the form of Fisher information, which is simple enough for facts about the world.) Using this definition, it becomes much easier to define actionable knowledge: a thing X has actionable knowledge of a fact Y iff it contains some representation of Y that is so simple that an agent with access to it could (with sufficiently little difficulty) take actions based on fact Y. (For example, I have actionable knowledge that 1 + 1 = 2, because my internal representation of this fact is so simple that I can literally type up its statement in a comment.) It also becomes clearer that actionable knowledge and knowledge are not the same: for example, the knowledge about the world held by a computer that records cryptographic hashes of everything it observes could not be acted upon without inverting the hashes, which is presumably infeasible.
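To make the hash example concrete, here’s a minimal sketch (the observation strings are made up, and I’m assuming SHA-256 as the hash):

```python
import hashlib

# The machine observes some facts about the world...
observations = ["human enters kitchen", "human trips over the vacuum"]

# ...but its entire internal state is a set of one-way digests.
digests = {hashlib.sha256(o.encode()).hexdigest() for o in observations}

# The digests do pin down the facts: a guessed fact can be *verified*...
guess = "human trips over the vacuum"
print(hashlib.sha256(guess.encode()).hexdigest() in digests)  # True

# ...but nothing short of inverting SHA-256 recovers the observations
# from `digests` alone, so an agent holding only `digests` has knowledge
# it cannot act on.
```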
So as for the human psychology/robot vacuum example: if your robot vacuum’s internal representation of human psychology is complex (e.g. nothing but raw video recordings of humans), then it’s not actionable knowledge and your robot vacuum can’t act on it; if it’s sufficiently simple, such as a low-complexity-yet-high-fidelity executable simulation of a human psyche, your robot vacuum can. My intuition also suggests that in this case your robot vacuum’s knowledge of human psychology is actionable iff it has a succinct representation of the natural abstraction of “human psychology” (I think this might be generalizable; i.e. knowledge is actionable iff it’s succinct when described in terms of natural abstractions), and that finding out whether your robot vacuum’s knowledge is sufficiently simple is essentially a matter of interpretability. As for the betting thing, the simple unified definition from the previous paragraph should apply as well.
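To illustrate the contrast (a toy sketch; `raw_frames` and `simulate_reaction` are made-up names, and the “psychology” here is a one-line caricature):

```python
# "Complex" representation: a huge pile of raw observations. The
# regularity is in there, but extracting it is the hard part.
raw_frames = [("loud noise", "startled")] * 100_000  # stand-in for video

# "Simple" representation: a short executable model of the same
# regularity, phrased in the natural abstraction (stimulus -> reaction).
def simulate_reaction(stimulus: str) -> str:
    return "startled" if stimulus == "loud noise" else "calm"

# An agent with the simple representation is one function call away
# from acting on the knowledge:
if simulate_reaction("loud noise") == "startled":
    plan = "pause vacuuming while the human is nearby"
```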
I very much agree with the emphasis on actionability. But what is it about a physical artifact that makes the knowledge it contains actionable? I don’t think it can be simplicity alone. Suppose I record the trajectory of the moon over many nights by carving markings into a piece of wood. This is a very simple representation, but it does not contain actionable knowledge in the same way that a textbook on Newtonian mechanics does, even if the textbook were represented in a less simple way (say, as a PDF on a computer).
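One way to sharpen the contrast (a toy sketch with made-up numbers, not real ephemeris data):

```python
# The carved markings: a bare record of observed positions
# (night -> angle in degrees). Very simple, but silent beyond the data.
markings = {1: 12.0, 2: 25.1, 3: 38.2}

# The textbook: a short *rule* behind those observations, so it answers
# questions the markings never recorded (a crude ~13 deg/night drift).
def predicted_position(night: int) -> float:
    return (12.0 + 13.1 * (night - 1)) % 360.0

print(markings.get(30))        # None -- the carving has nothing to say
print(predicted_position(30))  # ~31.9 -- the law extrapolates
```

The markings and the rule can even have comparable description lengths, yet only the rule supports action on novel nights, which is what makes me doubt that simplicity alone is the criterion.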