“You’re deciding the logical fact that the program-that-is-you makes a certain output.”
There is no need to focus on these concepts. That the fact of decision is “logical” doesn’t usefully characterize it: if we talk about the “physical” fact of making a decision, then everything else remains the same; you’d just need to see what this physical event implies about decisions made by your near-copies elsewhere (among the normal consequences). Likewise, pointing to a physical event doesn’t require conceptualizing a “program” or even an “agent” that computes the state of this event; you could just specify coordinates in spacetime and work on figuring out what’s there (roughly speaking).
(It’s of course convenient to work with abstractly defined structures, in particular decisions generated by programs (rather than structures abstractly defined in some more general way), and at least with mathematical structuralism in mind, working with abstract structures looks like the right way of describing things in general.)
But how does one identify/encode a physical fact? With a logical fact you can say “Program with source code X outputs Y” and then deduce consequences from that. I don’t see what the equivalent is with a “physical” notion of decision. Is the agent supposed to have hard-coded knowledge of the laws of physics and its spacetime coordinates (which would take the place of knowledge of its own source code) and then represent a decision as “the object at coordinate X in the universe with laws Y and initial conditions Z does A”? That seems like a much less elegant and practical solution to me. And you’re still using it as a logical fact, i.e., deducing logical consequences from it, right?
I feel like you must be making a point that I’m not getting...
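As a toy sketch of the two encodings being contrasted here (everything below is illustrative assumption on my part, not anything specified in the thread), the “logical” fact is indexed by source code, while the “physical” fact is indexed by a location within a given world:

```python
# Toy contrast between two ways of encoding "the fact of a decision".
# All names, fields, and values here are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class LogicalFact:
    """'Program with source code X outputs Y' -- indexed by source code."""
    source_code: str
    output: str

@dataclass(frozen=True)
class PhysicalFact:
    """'The object at coordinate X in the universe with laws Y and
    initial conditions Z does A' -- indexed by a location in a world."""
    laws: str
    initial_conditions: str
    coordinates: tuple
    action: str

# Either record can serve as the premise from which consequences
# (e.g. what near-copies elsewhere decide) are then deduced.
logical = LogicalFact(source_code="def agent(): return 'one-box'", output="one-box")
physical = PhysicalFact(laws="Y", initial_conditions="Z",
                        coordinates=(0.0, 0.0, 0.0, 0.0), action="one-box")
```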
The same way you find your way home. How does that work? Presumably only if we assume the context of a particular collection of physical worlds (perhaps with a selected, preferred approximate location). Given that we’re considering only some worlds, the additional information that an agent has allows it to find a location within these worlds, without knowing the definition of those worlds.
This, I think, is an important point, and it comes up frequently for various reasons: to usefully act or reason, an agent doesn’t have to “personally” understand what’s going on; there may be “external” assumptions that enable the agent to act within them without having access to them.
(I’m probably rehashing what is already obvious to all readers, or missing something, but:)
That seems like a much less elegant and practical solution to me.
’Course, which Nesov acknowledged with his closing sentence, but even so it’s conceivable, which indicates that the focus on logical facts isn’t a necessary distinction between old and new decision theories. And Nesov’s claim was only that there is no need to focus on logical-ness as such to explain the distinction.
And you’re still using it as a logical fact, i.e., deducing logical consequences from it, right?
Does comprehensive physical knowledge look any different from decompressed logical knowledge? Logical facts seem to be facts about what remains true no matter where you are, but if you know everything about where you are already then the logical aspect of your knowledge doesn’t need to be acknowledged or represented. More concretely, if you have a detailed physical model of your selves, i.e. all instantiations of the program that is you, across all possible quantum branches, and you know that all of them like to eat cake, then there’s no additional information hidden in the logical fact “program X, i.e. me, likes to eat cake”. You can represent the knowledge either way, at least theoretically, which I think is Nesov’s point, maybe?
But this only seems true of physically instantiated agents reasoning about decisions from a first-person perspective, so to speak, so I’m confused; there doesn’t seem to be a corresponding “physical” way to model how purely mathematical objects can ambiently control other purely mathematical objects. Is it assumed that such mathematical objects can only have causal influence by way of their showing up in a physically instantiated decision calculus somewhere (at least for our purposes)? Or is the ability of new decision theories to reason about purely mathematical objects considered relatively tangential to the decision theories’ defining features (even if it is a real advantage)?
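A minimal sketch of the cake example above, assuming a made-up exhaustive physical model (nothing here comes from the thread): if the model already enumerates every instantiation of the program, the “logical” fact can simply be read off the enumeration rather than stored as separate information.

```python
# Hypothetical exhaustive physical model: every instantiation of "program X"
# across branches. Purely illustrative data.
physical_model = [
    {"branch": "b1", "program": "X", "likes_cake": True},
    {"branch": "b2", "program": "X", "likes_cake": True},
    {"branch": "b3", "program": "X", "likes_cake": True},
]

# The "logical" fact 'program X likes cake' is recovered from the physical
# model rather than carried as additional information.
logical_fact_program_X_likes_cake = all(
    inst["likes_cake"] for inst in physical_model if inst["program"] == "X"
)
print(logical_fact_program_X_likes_cake)  # True -- nothing beyond the enumeration
```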