I agree that AIs only optimizing for good human ratings on the episode (what I call “reward-on-the-episode seekers”) have incentives to seize control of the reward process, that this is indeed dangerous, and that in some cases it will incentivize AIs to fake alignment in an effort to seize control of the reward process on the episode (I discuss this in the section on “non-schemers with schemer-like traits”). However, I also think that reward-on-the-episode seekers are substantially less scary than schemers in my sense, for reasons I discuss here (i.e., reasons to do with what I call “responsiveness to honest tests,” the ambition and temporal scope of their goals, and their propensity to engage in various forms of sandbagging and what I call “early undermining”). This is especially true for reward-on-the-episode seekers with fairly short episodes, where grabbing control over the reward process may not be feasible on the relevant timescales.