The knob on the current supply is part of the territory, but the fact that “being able to adjust the knob” is an affordance of the territory, while “setting the knob so that it outputs a desired voltage” isn’t (or at least is a less central example), is part of our map.
On the one hand, yeah, that seems like the obvious objection, and it’s a solid one. On the other hand… it does seem like “setting the knob so that it outputs a desired voltage” requires either a very unusual coincidence or a very artificial setup, and I’m not entirely convinced that the difference is purely a matter of perspective. It feels—at least to me—like there’s something unusual going on there, in a sense which is not specific to human aesthetics.
I’m gonna think out loud here for a moment...
The big red flag is that “setting the knob so that it outputs a desired voltage” implies either one hell of a coincidence, or some kind of control system. And control systems are one of the things which I know, from other examples, can make a system with one causal structure behave-as-though it follows some other causal structure. (See e.g. the thermostat example in my response to Ruby’s comment.) In fact, so far every example I’ve seen where the abstract causal structure doesn’t match the causal structure of the underlying system involves either feedback control or some kind of embedded model of the system (and note that, by the good regulator theorem, feedback control implies some kind of embedded model).
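To make the control-system point concrete, here is a toy sketch in Python (all names and numbers are mine, purely for illustration): mechanically the arrow runs from knob to voltage, but wrapping a simple proportional controller around the circuit makes interventions on the setpoint reliably move the voltage, as though the arrow ran the other way.

```python
# Toy sketch (all names/numbers invented): a plain knob -> voltage
# system, plus a feedback controller wrapped around it. Mechanically
# the causal arrow runs knob -> voltage, but with the controller in
# the loop, intervening on the *setpoint* reliably moves the voltage,
# so the composite behaves as though the arrow ran setpoint -> voltage.

GAIN = 2.0  # true physics of the toy circuit: voltage = GAIN * knob

def voltage(knob: float) -> float:
    """The territory: voltage is caused by the knob setting."""
    return GAIN * knob

def run_controller(setpoint: float, steps: int = 50) -> float:
    """Proportional feedback: nudge the knob until voltage ~= setpoint."""
    knob = 0.0
    for _ in range(steps):
        error = setpoint - voltage(knob)
        knob += 0.1 * error  # step size implicitly encodes a model of GAIN
    return voltage(knob)

# "Set the voltage to 10V" is now a usable affordance:
print(run_controller(10.0))  # ~10.0, at whatever knob position that takes
```

And note that the step size only works because it (crudely) reflects the gain of the circuit, which is roughly the good-regulator point: the controller carries an embedded model of the system it controls.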
This is obviously very speculative, but… I suspect that we could say something like “any set of counterfactual queries which flips the direction of this arrow needs to involve an embedded model of the system (probably embedded in a controller)”.
I’m not totally sure I’m objecting to anything. For something that thinks about and interacts with the world more or less like a human, I agree that turning a knob is probably an objectively better affordance than e.g. selecting the location of each atom individually.
You could even phrase this as an objective fact: “for agents in some class that includes humans, there are certain guidelines for constructing causal models that, if obeyed, lead to them being better predictors than if not.” This would be a fact about the territory. And it would tell you that if you were like a human, and wanted to predict the effect of your actions, there would be some rules your map would follow.
And then if your map did follow those rules, that would be a fact about your map.
I think there’s a way to drop the “for humans/agents like humans” part. Like, we could drop our circuit in the middle of the woods, and sometimes random animals would accidentally turn the knob on the supply, or temperature changes would adjust the resistance in the resistor. “Which counterfactuals actually happen sometimes” doesn’t really seem like the right criterion to use as fundamental here, but it does suggest that there’s something more universal in play.
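As a toy illustration of that (all numbers invented by me), nature’s “interventions” in the woods land on the knob and on the resistor’s temperature, never directly on the output voltage; the voltage only ever changes via those upstream variables:

```python
# Circuit-in-the-woods toy: nature spontaneously runs counterfactuals
# on the knob and the temperature, but there is no natural process that
# sets the output voltage directly. All numbers are made up.
import random

random.seed(0)
for _ in range(5):
    knob = random.uniform(0.0, 1.0)       # an animal bumps the knob
    temp_c = random.uniform(-10.0, 35.0)  # weather shifts the temperature
    resistance = 100.0 * (1 + 0.004 * (temp_c - 20.0))  # crude temp coefficient
    current = 5.0 * knob                  # supply current set by the knob
    print(f"V = {current * resistance:.1f}")  # Ohm's law: V = I * R
```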
I think another related qualitative intuition is constructive vs. nonconstructive. “Just turn the knob” is simple and obvious enough that you regard it as constructive, leaving no parts unspecified for a planner to compute. “Just set the voltage to 10V” seems nonconstructive—like it would require further abstract thought to make a plan that gets the voltage to 10V. But as we’ve learned, turning knobs is a fairly tricky robotics task, requiring plenty of thought—just thought that’s unconscious in humans.
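Here’s a sketch of that gap, under the toy assumption (mine, not from the thread) that the planner has a monotone model of the circuit: “turn the knob” is a primitive it can just execute, while “set the voltage to 10V” forces it to invert the model, e.g. by search:

```python
# Constructive vs. nonconstructive, as a toy inverse problem. The
# model and its numbers are invented for illustration: "turn the knob"
# is a primitive action, while "set the voltage to 10V" requires
# searching an embedded model for a knob setting that achieves it.

def model_voltage(knob: float) -> float:
    """The planner's embedded model of the circuit (toy: monotone in knob)."""
    return 12.0 * knob ** 0.5

def knob_for_voltage(target: float, lo: float = 0.0, hi: float = 1.0) -> float:
    """The nonconstructive goal made constructive via bisection search."""
    for _ in range(40):
        mid = (lo + hi) / 2
        if model_voltage(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(knob_for_voltage(10.0))  # the planner's extra work: knob ~= 0.694
```

The search loop is the “further abstract thought” in the paragraph above: it only goes through because the planner carries a model it can query, which loops back to the embedded-model point.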