Overall, I think the issue of causal confusion and OOD misgeneralisation is much more about capabilities than about alignment, especially if we are talking about the long-term x-risk from superintelligent AI, rather than short/mid-term AI risk.
The main argument of the post isn’t “ASI/AGI may be causally confused, what are the consequences of that?” but rather “scaling up static pretraining may result in causally confused models, which hence probably wouldn’t be considered ASI/AGI”. In practice, I think that if we do get AGI/ASI, then almost by definition it’s not causally confused.
OOD misgeneralisation is absolutely inevitable, due to Gödel’s incompleteness of the universe and the fact that all the systems that evolve on Earth generally climb up in complexity.
In a theoretical sense this may be true (I’m not really familiar with the argument), but in practice OOD misgeneralisation is probably a spectrum, and models can be more or less causally confused about how the world works. We’re arguing here that static training, even when scaled up, plausibly still leaves a model causally confused about a lot of how the world works.
Did you use the term “objective misgeneralisation” rather than “goal misgeneralisation” on purpose? “Objective” and “goal” are synonyms, but “objective misgeneralisation” is hardly used, whereas “goal misgeneralisation” is the standard term.
Maybe I’m missing something obvious, but this argument looks wrong to me, or else it assumes that the learning algorithm is not allowed to discover additional (conceptual, abstract, hidden, implicit) variables in the training data, which is false for deep neural networks.
Given that the model is trained statically, while it could hypothesise about additional variables of the kinds you listed, it can never know which variables, or which values for those variables, are correct without domain labels or interventional data. Specifically, while “Discovering such hidden confounders doesn’t give interventional capacity” is true, to discover these confounders in the first place it would need interventional capacity.
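To make that identifiability point concrete, here is a minimal sketch (my own toy binary example, not from the post) of two structural causal models that produce exactly the same observational distribution but disagree under intervention; static data alone cannot tell a learner which one it is in, however many hidden variables it hypothesises.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Model A: X directly causes Y (no hidden confounder).
x_a = rng.integers(0, 2, n)
y_a = x_a.copy()

# Model B: a hidden confounder U sets both X and Y; X has no effect on Y.
u = rng.integers(0, 2, n)
x_b = u.copy()
y_b = u.copy()

# Observationally the two models are indistinguishable:
# P(X=1) = 0.5 and P(Y=1 | X=1) = 1 in both.
print("P(Y=1 | X=1), model A:", y_a[x_a == 1].mean())
print("P(Y=1 | X=1), model B:", y_b[x_b == 1].mean())

# Under the intervention do(X=1) the models disagree:
# in A, Y follows the forced X; in B, Y still follows the untouched U.
y_a_do = np.ones(n)   # Y = X = 1 by construction
y_b_do = u            # Y = U, unaffected by setting X
print("P(Y=1 | do(X=1)), model A:", y_a_do.mean())
print("P(Y=1 | do(X=1)), model B:", y_b_do.mean())
```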
I don’t understand the italicised part of this sentence. Why will P(shorts, ice cream) be a reliable guide to decision-making?
We’re not saying that P(shorts, ice cream) is good for decision-making, but P(shorts, do(ice cream)) is useful insofar as the goal is to make someone wear shorts and providing ice cream is one of the possible actions (as the causal model will demonstrate that providing ice cream isn’t useful for making someone wear shorts).
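As a concrete illustration of the gap being pointed at, here is a minimal simulated sketch of the shorts/ice-cream example (the weather confounder and the specific probabilities are my own illustrative assumptions, not from the post): observationally, ice cream strongly predicts shorts, but intervening on ice cream leaves the shorts rate at its baseline, which is exactly the information the interventional distribution gives a decision-maker and the observational one does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy SCM: hot weather causes both ice-cream eating and shorts wearing;
# ice cream itself has no causal effect on shorts.
hot = rng.random(n) < 0.5
ice_cream = rng.random(n) < np.where(hot, 0.8, 0.1)
shorts = rng.random(n) < np.where(hot, 0.9, 0.1)

# Observational: shorts and ice cream are strongly associated via the confounder.
print("P(shorts | ice cream):   ", shorts[ice_cream].mean())
print("P(shorts | no ice cream):", shorts[~ice_cream].mean())

# Interventional: do(ice cream = 1) severs the weather -> ice cream edge,
# but the shorts mechanism still depends only on the weather, so the shorts
# rate stays at its baseline, exposing the lack of a causal effect.
shorts_do = rng.random(n) < np.where(hot, 0.9, 0.1)
print("P(shorts | do(ice cream)):", shorts_do.mean())
```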
What do these symbols in parens before the claims mean?
They are meant to be referring to the previous parts of the argument, but I’ve just realised that this hasn’t worked as the labels aren’t correct. I’ll fix that.
On “objective misgeneralisation” vs “goal misgeneralisation”: no particular reason; I’ll edit the post to use goal misgeneralisation. Goal misgeneralisation is the standard term, but it hasn’t been so for very long (see e.g. this tweet: https://twitter.com/DavidSKrueger/status/1540303276800983041).