I think there’s something big left out of this post, which is accounting for the agent that observes and judges the causal relationships. Something has to decide how to carve up the world into parts and calculate counterfactuals. This exists implicitly in your approach to causality, but you don’t address it here, which I think is unfortunate: although humans generally share a frame of reference for judging causality, alien minds, like AI, may not.
The way I think about this is that the variables constitute a reference frame. They define particular, well-defined measurements that can be performed, whose outcomes all observers would agree on. To talk about interventions, there must also be a well-defined “set” operation associated with each variable, so that the effect of an intervention is well-defined.
Once we have the variables, and a “set” and “get” operation for each (i.e. intervene and observe operations), then causality is an objective property of the universe. Regardless of who does the experiment (i.e. sets a few variables) and does the measurement (i.e. observes some variables), the outcome will follow the same distribution.
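To make the “set”/“get” picture concrete, here’s a minimal sketch; the two-variable model and its mechanisms are my own illustration, not anything from the post:

```python
import random

# A tiny structural causal model with two binary variables, X -> Y.
# Each variable supports a "get" (observe) and a "set" (intervene) operation.
# The mechanism below is purely illustrative.

class CausalModel:
    def __init__(self):
        self.clamped = {}  # variables fixed by a "set" operation

    def set(self, var, value):
        """Intervene: clamp the variable, overriding its usual mechanism."""
        self.clamped[var] = value

    def get(self, var):
        """Observe: sample the model and read off the variable's value."""
        return self._sample()[var]

    def _sample(self):
        x = self.clamped.get("X", random.random() < 0.5)         # X ~ Bernoulli(0.5)
        y = self.clamped.get("Y", x and random.random() < 0.9)   # Y depends on X
        return {"X": x, "Y": y}

# Whoever performs the experiment, do(X=True) induces the same distribution
# over Y; that is the sense in which causality becomes observer-independent
# once the variables and their set/get operations are pinned down.
model = CausalModel()
model.set("X", True)
print(sum(model.get("Y") for _ in range(10_000)) / 10_000)  # ~0.9 for any experimenter
```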
So in short, I don’t think we need to talk about an agent observer beyond what we already say about the variables.
Yes, the variables constitute a reference frame, which is to say an ultimately subjective way of viewing the world. Even if there is high inter-observer agreement about the shape of the reference frame, that agreement isn’t guaranteed unless you also posit that something like Wentworth’s natural abstraction hypothesis is true.
Perhaps a toy example will help explain my point. Suppose the grass should only be watered when there’s a violet cube on the lawn. To automate this, a sensor is attached to the sprinklers that turns them on only when it sees a violet cube. I place a violet cube on the lawn to make sure the lawn is watered. I return a week later and find the grass is dead.
What happened? The cube was actually painted with a fine mix of red and blue paint. My eyes interpreted the purple as violet, but the sensor did not.
Conversely, if it had been my job to turn on the sprinklers rather than the sensor’s, I would have been fooled by the purple cube into turning them on.
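Here’s a rough sketch of that mismatch, with made-up measurement procedures for the human and the sensor; the point is only that the two “detect a violet cube” checks are different procedures, and so different variables:

```python
# Two different measurement procedures for "there is a violet cube on the lawn".
# The paint encoding and thresholds are invented for illustration.

def human_sees_violet(cube):
    # Human colour vision blends the pigments: a fine mix of red and blue reads as violet.
    return cube.get("red_paint", 0) > 0 and cube.get("blue_paint", 0) > 0

def sensor_sees_violet(cube):
    # The sensor only responds to a dedicated violet pigment.
    return cube.get("violet_paint", 0) > 0

painted_cube = {"red_paint": 0.5, "blue_paint": 0.5}  # the cube from the story

print(human_sees_violet(painted_cube))   # True  -> I assume the lawn will be watered
print(sensor_sees_violet(painted_cube))  # False -> the sprinklers never turn on
```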
It’s perhaps tempting to say that second scenario doesn’t count because I’m now part of the system, but that’s also kind of the point. I, an observer of this system trying to understand its causality, am also embedded within the system (even if I think I can isolate it for demonstration purposes, I can’t do this in reality, especially when AIs are involved that will reward hack by doing things that were supposed to be “outside” the system). So my subjective experience matters not only to how causality is reckoned, but also to how the physical reality being mapped by causality plays out.
Sure, I think we’re saying the same thing: causality is frame-dependent, and the variables define the frame (in your example, you and the sensor have different measurement procedures for detecting the purple cube, so you aren’t actually talking about the same random variable).
How big a problem is it? In practice it usually seems fine, if we’re careful to test our sensors / double-check that we’re using language in the same way. In theory, scaled up to superintelligence, it’s not impossible that it would be a problem.
But I would also like to emphasize that the problem you’re pointing to isn’t restricted to causality; it applies to all kinds of linguistic reference. So to the extent we like to talk about AI systems doing things at all, causality is no worse off than natural language or other formal languages.
I think people sometimes hold it to a higher standard than natural language, because it feels like a formal language could somehow naturally intersect with a programmed AI. But of course causality doesn’t solve the reference problem in general. Partly for this reason, we’re mostly using causality as a descriptive language for talking clearly and precisely (relative to human terms) about AI systems and their properties.
Fair. For what it’s worth, I strongly agree that causality is just one domain where this problem becomes apparent, and that we should be worried about it generally for superintelligent agents, much more so than many folks seem (in my estimation) to worry about it today.