Suggestion for content 2: relationship to invariant causal prediction
Many people in ML these days seem excited about achieving out-of-distribution generalization with techniques like invariant causal prediction. See e.g. this, this, section 5.2 here, and related background. This literature seems promising, but it is missing from discussions of inner alignment. It would be useful to discuss how far it can go toward helping solve inner alignment.
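To make the connection concrete, here is a toy sketch of the core idea behind invariant causal prediction: for each subset of candidate predictors, fit a pooled regression across environments and accept the subset only if its residuals look invariant across environments; the estimated causal parents are the intersection of all accepted subsets. The data, thresholds, and crude mean/variance invariance check below are invented for illustration (a real implementation would use a proper statistical test, as in the ICP literature), not taken from any of the linked work.

```python
import itertools
import numpy as np

def fit_residuals(X, y, subset):
    # OLS of y on the selected columns (plus intercept); return residuals.
    if not subset:
        return y - y.mean()
    Xs = np.column_stack([np.ones(len(X)), X[:, subset]])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return y - Xs @ beta

def icp(X, y, envs):
    # Toy ICP: accept a subset if its residuals have similar mean and
    # spread in every environment (crude stand-in for a proper test),
    # then intersect all accepted subsets.
    d = X.shape[1]
    env_ids = np.unique(envs)
    accepted = []
    for r in range(d + 1):
        for subset in itertools.combinations(range(d), r):
            res = fit_residuals(X, y, list(subset))
            means = [res[envs == e].mean() for e in env_ids]
            stds = [res[envs == e].std() for e in env_ids]
            invariant = (max(means) - min(means) < 0.2
                         and max(stds) / max(min(stds), 1e-9) < 1.5)
            if invariant:
                accepted.append(set(subset))
    return set.intersection(*accepted) if accepted else set()

# Synthetic example: x1 causes y; x2 is an effect of y whose mechanism
# changes across environments, so only {x1} yields invariant residuals.
rng = np.random.default_rng(0)
n = 2000
envs = np.repeat([0, 1], n)
x1 = np.concatenate([rng.normal(0, 1, n), rng.normal(0, 2, n)])
y = x1 + rng.normal(0, 0.5, 2 * n)
scale = np.where(envs == 0, 1.0, 2.0)  # env-dependent effect mechanism
x2 = scale * y + rng.normal(0, 0.5, 2 * n)
X = np.column_stack([x1, x2])
parents = icp(X, y, envs)  # should recover the causal parent, column 0
```

The relevance to inner alignment is exactly the property this sketch illustrates: predictors chosen this way are the ones whose relationship to the target survives a change of environment, which is the kind of guarantee one would want against goals that only look right on the training distribution.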