Problem: A sailing ship that is drawing a map of a coastline but sinks before the map is ever used by anyone to take action would not be accumulating knowledge by this definition, yet does in fact seem to be accumulating knowledge.
Doesn’t seem too problematic. It was acquiring information. If you acquire information and then die, then yes, that knowledge may (and probably will) be lost.
Well, if I learn that my robot vacuum is unexpectedly building a model of human psychology, then I’m concerned regardless of whether it in fact acts on that model, which means that I really want to define “knowledge” in a way that does not depend on whether a certain agent acts upon it.
For the same reason I think it would be natural to say that the sailing ship had knowledge, and that knowledge was lost when it sank. But if we define knowledge in terms of the actions that follow then the sailing ship never had knowledge in the first place.
Now you might say that it was possible that the sailing ship would have survived and acted upon its knowledge of the coastline, but imagine a sailing ship that, unbeknownst to it, is sailing into a storm in which it will certainly be destroyed, and along the way is building an accurate map of the coastline. I would say that the sailing ship is accumulating knowledge and that the knowledge is lost when the sailing ship sinks. But the attempted definition from this post would say that the sailing ship is not accumulating knowledge at all, which seems strange.
It’s of course important to ground out these investigations in practical goals or else we end up in an endless maze of philosophical examples and counter-examples, but I do think this particular concern grounds out in the practical goal of overcoming deception in policies derived from machine learning.