Solomonoff’s theory of induction, along with the AIXI theory of intelligence, operationalize knowledge as the ability to predict observations.
Maybe this is what knowledge is. But I’d like to try coming up with at least one alternative. So here goes!
I want to define knowledge as part of an agent.
A system contains knowledge if the agent who built it can successfully attain its goals in its likely environments by using that system to figure out which of its actions will lead to outcomes the agent wants.
When comparing different systems that help an agent achieve its goals, there is no single ordering of how much knowledge each contains: a system may help more in some environments and less in others, so what we get is a Pareto frontier.
A USB stick with blueprints for how to build nukes, in an otherwise lifeless universe, does not contain “knowledge”, because nobody ever “knows” it. (It has knowledge in it the way a tree’s rings do—it contains info that an agent like myself can turn into knowledge.)
I can store my knowledge outside of me. I can write your address down, forget it from my brain, pick it up later, and I still have that knowledge, stored on the paper in my office.
To find out how much knowledge Alice has, I can run her in lots of environments and see what she is able to accomplish by her standards.
Alice “knows” a certain amount about cars if she can use one to drive to the store to buy food. She knows more if she can use a broken car to do the same.
To compare Alice’s knowledge to Bob’s, I can give Alice Bob’s preferences, run Alice in lots of environments, and see what she is able to accomplish by Bob’s standards.
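To make that concrete, here is a rough sketch of the measurement I have in mind. Every interface in it (the agent’s act method, the environment’s reset and step, the preference function) is something I’m inventing for illustration, not a real library:

```python
def knowledge_score(agent, preferences, environments, episodes_per_env=10):
    """Rough proxy for how much `agent` knows, judged by `preferences`.

    Assumed interfaces (made up for this sketch): the agent exposes
    act(observation); each environment exposes reset() -> state and
    step(action) -> (state, done); `preferences` maps a final state
    to a real-valued score.
    """
    total = 0.0
    for env in environments:
        for _ in range(episodes_per_env):
            state = env.reset()
            done = False
            while not done:
                state, done = env.step(agent.act(state))
            total += preferences(state)
    return total / (len(environments) * episodes_per_env)

# Comparing Alice to Bob: run Alice against *Bob's* preferences and
# compare with Bob's own score on the same environments.
# alice_by_bobs_lights = knowledge_score(alice, bob_preferences, envs)
# bob_by_bobs_lights   = knowledge_score(bob, bob_preferences, envs)
```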
To give Alice’s knowledge to a paperclip maximizer, I ask what a paperclip maximizer wants that Alice can help with. Perhaps Alice knows the location of a steel mine that Clippy doesn’t.
When she can outperform Clippy given the same resources, she knows something Clippy doesn’t.
To train a system to “extract” knowledge from Alice and “give” it to Clippy, I need to modify Clippy to do as well as Alice does in those environments. Then Clippy will “know” what Alice knows.
How do I modify Clippy? I don’t know. So let’s first brute force the heck out of it. Make every physically possible alteration to Clippy, and run each in all possible environments. Every alteration that does at least as well as Clippy in all environments, and also at least as well as Alice, has “learned” what Alice knows.
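Written out, the brute force looks something like the sketch below. The enumerator all_alterations_of and the two scoring functions are stand-ins for machinery I don’t actually have; the point is only to pin down the success criterion, not to be tractable:

```python
def brute_force_transfer(clippy, environments,
                         all_alterations_of, clippy_score, alice_score):
    """Brute-force sketch of 'giving' Alice's knowledge to Clippy.

    Assumptions: `all_alterations_of(clippy)` enumerates every physically
    possible modification of Clippy; `clippy_score(agent, env)` says how
    well an agent does in `env` by Clippy's standards; `alice_score(env)`
    says how well Alice did there by those same standards.
    """
    learned = []
    for candidate in all_alterations_of(clippy):
        # Must not get worse at anything Clippy could already do...
        keeps_competence = all(
            clippy_score(candidate, env) >= clippy_score(clippy, env)
            for env in environments
        )
        # ...and must do at least as well as Alice where she had the edge.
        matches_alice = all(
            clippy_score(candidate, env) >= alice_score(env)
            for env in environments
        )
        if keeps_competence and matches_alice:
            learned.append(candidate)  # these have "learned" what Alice knows
    return learned
```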
I’d bet there’s a more sensible algorithm to run, but I won’t reach for it now.
===
This was a fun question to answer.
I’m not sure what it would look like to have successfully answered the question, so I can’t tell if I did.
Oli was asking me how to get knowledge from one agent to another yesterday, and my first idea didn’t even have the right type signature, so I wanted to generate another proposal.
I’ll ponder what it would look like to succeed, then I can come back and grade my answer.
Your definition requires that we already know how to modify Alice to have Clippy’s goals. So your brute force idea for how to modify Clippy to have Alice’s knowledge doesn’t add very much; it still relies on a magic goal/belief division, so giving a concrete algorithm doesn’t really clarify.
Really good to see this kind of response.
Ah, very good point. How interesting…
(If I’d concretely thought of transferring knowledge between a bird and a dog this would have been obvious.)
I like this definition too. You might add some sort of distribution over goals (sort of like Attainable Utility) so that e.g. Alice can know things about things that she doesn’t personally care about.
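One way that suggestion could be cashed out, reusing the hypothetical knowledge_score sketch above and a made-up goal_distribution.sample() interface:

```python
def goal_averaged_knowledge(agent, goal_distribution, environments, samples=100):
    """Expected knowledge_score over sampled goals, so that Alice can count
    as knowing about things she doesn't personally care about.
    Assumes goal_distribution.sample() returns a preference function."""
    scores = [
        knowledge_score(agent, goal_distribution.sample(), environments)
        for _ in range(samples)
    ]
    return sum(scores) / samples
```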