I think this gestures at a natural idea, but I don’t know how to make it work. The similar thing I’ve been musing about is the way in which an agent’s policy/will acausally influences all the facts, including observations from experiments, calculations, and inferences. So everything varies with will. Some facts and some maps between facts are constant, but others change with will. This “variation with will” semantics is different for different agents (so any given fact is interpreted differently by different agents), and when one agent tries to figure out how another agent reasons, we can consider variation with the wills of both agents.
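As a very rough toy of what “facts varying with will” could mean (nothing here is from the original comment; the wills and payoffs are made up for illustration), one can take facts to literally be functions of the agents’ wills, with the two-agent case varying over the product of both agents’ wills:

```python
from itertools import product

# A minimal sketch, assuming a "will" is just a label and a "fact" is a
# function of wills. All names and numbers are illustrative.

my_wills = ["cooperate", "defect"]
your_wills = ["cooperate", "defect"]

def payoff(mine, yours):
    """A fact that varies with the wills of both agents (toy Prisoner's Dilemma payoffs)."""
    table = {("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
             ("defect", "cooperate"): 5, ("defect", "defect"): 1}
    return table[(mine, yours)]

def arithmetic(mine, yours):
    """A constant fact: the same under every choice of wills."""
    return 2 + 2

pairs = list(product(my_wills, your_wills))
print(len({arithmetic(m, y) for m, y in pairs}) == 1)  # True: constant across all wills
print(len({payoff(m, y) for m, y in pairs}) > 1)       # True: varies with will
```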
(I’m using a different term instead of “policy” because there are issues with policies in counterfactuals. If a policy is itself seen as an algorithm, it might work differently depending on the counterfactual, in which case an agent in some counterfactual will fail to follow the policy that determines that counterfactual. But its will is to follow that policy nonetheless. Put another way, this follows the distinction between abstract algorithms and their instances in counterfactuals, or, in CDT terms, between the problem statement for a surgery and its outcome.)
A vaguely similar thing in math is how reasoning in a topos works. You specify a site of variation (in this case something that works as a “site of will”), and then you have a sheaf topos that interprets at least intuitionistic logic, so most things make sense there and in any other topos, interpreted as varying with will. This suggests that statements from an internal language that can be interpreted in any topos of variation with will are objective; being constant (not varying with will) or having the same interpretation for different agents (which doesn’t make sense from this point of view) is not necessary for objectivity. On the other hand, there are facts about a particular topos that don’t work elsewhere; these are “subjective” and only naturally make sense for a particular agent. Also, constant maps between non-constant elements might be related to the idea of a “dependence” between facts, perhaps causality (a dependence between facts that isn’t destroyed by interventions, or doesn’t vary with will, even though the facts themselves do); see this MO answer for a construction that might admit this interpretation.
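To make “interpreted as varying with will” slightly more concrete, here is a minimal sketch of Kripke-style (presheaf) semantics over a made-up finite poset of will-states; the specific wills and propositions are invented for illustration, and this is only a toy of the topos picture, not a model of agents:

```python
# A minimal sketch, assuming a toy "site of will": a finite poset of
# will-states ordered by refinement, with Kripke semantics for
# intuitionistic logic. The wills and propositions below are made up.

wills = ["undecided", "one-box", "two-box"]
# (w, v) in le means "v refines w"; reflexive, and "undecided" can settle either way.
le = {(w, w) for w in wills} | {("undecided", "one-box"), ("undecided", "two-box")}

def above(w):
    """All refinements of w (will-states reachable by further settling the will)."""
    return {v for v in wills if (w, v) in le}

# A proposition is an upward-closed set of will-states: once true, it stays
# true as the will gets further settled.
TRUE = frozenset(wills)
FALSE = frozenset()

def impl(p, q):
    """Intuitionistic implication: holds at w iff every refinement of w in p is also in q."""
    return frozenset(w for w in wills if all(v in q for v in above(w) if v in p))

def neg(p):
    return impl(p, FALSE)

# A fact that varies with will: "the opaque box is empty" (toy Newcomb flavour).
box_empty = frozenset({"two-box"})
# A constant fact, true at every will-state: "2 + 2 = 4".
arithmetic = TRUE

print(box_empty | neg(box_empty))   # excluded middle fails at "undecided"
print(arithmetic == TRUE)           # constant facts hold at every will-state
```

In this toy, a proposition that holds at every will-state plays the role of a constant fact, while excluded middle can fail for facts that genuinely vary with will, which is one way the internal logic stays merely intuitionistic.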
I know of no technical arguments in this space that get to anything that’s not already presupposed when trying to fit this framing, and these vaguely related things from math don’t clarify anything either (I don’t know how to actually model agents this way). There might be something here, but I don’t have any specific idea on how to make progress.