It goes under many names: transfer learning, robustness to distributional shift (or data shift), and out-of-distribution generalization. Each has (to me) slightly different connotations. For instance, transfer learning suggests that the researcher has a clear idea of the distinction between the first and second setting (so you “transfer” from the first to the second), whereas if in RL the part of the state space you occupy changes as you act, I would be more likely to call that distributional shift than transfer learning.