Response to the First Meditation
Even if truth judgments can only be made by comparing maps — even if we can never assess the territory directly — there is still a question of what the territory is like.
Furthermore, there is value in distinguishing our models/expectations of the world from our experiences within it.
This leads to two naive notions of truth:
Accurate descriptions of the territory are true.
Expectations that match experience are true.
Response to the Second Meditation
Even for an AI with no need to communicate with other agents, the idea of truth serves as a succinct term for the map-territory (belief-reality) correspondence.
It allows the AI to be more economical/efficient in how it stores information about its maps.
That’s some value.
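As a minimal sketch of that economy (the agent, its names, and the scoring rule here are hypothetical illustrations, not anything from the original text), a solitary agent could compress "how well this map has matched reality" into a single running number per belief:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Belief:
    """A belief pairs an expectation (a predicate over observations) with a
    compact running record of how often that expectation has matched experience."""
    description: str
    expectation: Callable[[dict], bool]
    hits: int = 0
    trials: int = 0

    def update(self, observation: dict) -> None:
        # Compare the expectation against one experience and record the outcome.
        self.trials += 1
        if self.expectation(observation):
            self.hits += 1

    @property
    def accuracy(self) -> float:
        # A single number standing in for "how well this map has matched the territory so far".
        return self.hits / self.trials if self.trials else 0.0

# Hypothetical usage: the belief "the sky is blue" checked against three observations.
sky_is_blue = Belief("the sky is blue", lambda obs: obs.get("sky") == "blue")
for obs in ({"sky": "blue"}, {"sky": "grey"}, {"sky": "blue"}):
    sky_is_blue.update(obs)
print(f"{sky_is_blue.accuracy:.2f}")  # 0.67 -- the compact belief-vs-experience summary
```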
Saying that a proposition is true is saying that it's an accurate description of the territory.
Tarski’s Litany: “The sentence ‘X’ is true iff X.”
The territory may be physical reality (“‘the sky is blue’ is true”), a formal system (“‘2 + 2 = 4’ is true”), other maps, etc.
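A minimal sketch of the disquotational point, with a toy "world" and toy "sentences" that are purely hypothetical: asserting that the sentence 'X' is true adds nothing beyond asserting X, so a truth predicate just unpacks the sentence's claim and evaluates it against the territory.

```python
# A toy "territory": the state of a tiny world.
world = {"sky_color": "blue"}

# A toy "map": sentences paired with the claim each one makes about that world.
sentences = {
    "the sky is blue": lambda w: w["sky_color"] == "blue",
    "the sky is green": lambda w: w["sky_color"] == "green",
}

def is_true(sentence: str) -> bool:
    """Tarski's schema in miniature: "'X' is true" unpacks to X,
    i.e. the sentence's claim evaluated against the territory."""
    return sentences[sentence](world)

# "'The sky is blue' is true iff the sky is blue."
assert is_true("the sky is blue") == (world["sky_color"] == "blue")
assert not is_true("the sky is green")
```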