Searching for meaning can be part of the activity. I think there is a sensible illustration from the old model of UDT, where there's an agent A() and a world U(), and we look for dependencies D(-) such that D(A) serves as a proxy for U(), and such that D has A factored out of it, so that D itself doesn't depend on A (no spurious dependence); this prevents cyclic reasoning when A makes decisions based on D. Here, we start with A and U as given, and then figure out D, which serves as the correspondence meaning of A in terms of its acausal influence on U. So the meaning of A is logically downstream of the definition of A.
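A minimal toy sketch of this picture, with all names and the particular world program invented for illustration (the real setting involves logical rather than causal counterfactuals, and the counterfactual oracle used here to detect spurious dependencies is itself an assumption):

```python
ACTIONS = [0, 1]

def A():
    """The agent's program; its output is what we want to factor out."""
    return 1  # suppose the agent has (somehow) settled on action 1

def U():
    """The world program, which happens to embed a call to A()."""
    return 10 * A() + 3

def U_counterfactual(action):
    """What U would compute if A's output were `action`.
    In the real setting this substitution is logical surgery,
    not something we can just evaluate; here it is stipulated."""
    return 10 * action + 3

# Candidate dependencies D(-): functions of the action alone.
# "A factored out" means D's body makes no internal call to A().
candidates = {
    "D1": lambda a: 10 * a + 3,    # genuine: tracks U under every substitution
    "D2": lambda a: 10 * A() + 3,  # spurious: ignores `a` and re-calls A()
}

def is_valid_dependency(D):
    """D is a non-spurious proxy for U if substituting each possible
    action into D reproduces U's value under that same substitution."""
    return all(D(a) == U_counterfactual(a) for a in ACTIONS)

for name, D in candidates.items():
    print(name, "valid:", is_valid_dependency(D))
```

Note that D2 satisfies D2(A()) == U() and yet fails the substitution test, which is the sketch's stand-in for the "not a spurious dependence" condition: D has to track U through variation in A's action, not merely agree with U at the action A actually takes.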
When we label buttons with “2+2=5” and “2+2=7”, the physical-world outcomes of pressing them are not on the way to the U() of the buttons' A(), so they are not relevant. But those outcomes are on the way to the human's U(), even though the human still doesn't know the meaning of their actions, since that meaning is downstream of knowing the scope of the semantic outcomes they already know to care about. This difference in the scopes of intended outcomes is the disanalogy.