You can only get from the premise “we can only know our own maps” to the conclusion “we can only care about our own maps” via the minor premise “you can only care about what you fully understand”. That premise is clearly wrong: one can care about unknown reality, just as one can care about the result of a football match that hasn’t happened yet. A lot of people do care about reality, at least directionally.
I think I need an operational definition of “care about” to process this. Presumably, you can care about anything you can imagine, whether you perceive it or not, whether it exists or not, whether it corresponds to other maps or not. Caring about something does not make it territory. It’s just another map.
@Dagon
Embedded agents are in the territory. How helpful that is depends on the territory.
Kind of. Identification of agency is map, not territory. Processing within an agent happens (presumably) in the territory, but the higher-level modeling and output of that processing is purely about maps. The agent is a subset of the territory, but doesn’t have agent-level access to the territory.
I think I need an operational definition of “care about” to process this.
If you define “care about” as “put resources into trying to achieve”, there’s plenty of evidence that people care about things they can’t fully define and don’t fully understand, not least the truth-seeking that happens here.
@Noosphere89
Well, no. A perfect map is still a map. The map/territory distinction does not lie in imperfect representation alone.