FWIW, I agree that if powerful AI is achieved via pure pre-training, then deceptive alignment is less likely, but this “the prediction goal is simple” argument seems very wrong to me. We care about the simplicity of the goal in terms of the world model (which will surely be heavily shaped by the importance of various predictions), and I don’t see any reason why things like close proxies of reward in RL training wouldn’t be just as simple for those models.
Interpreted naively, this goal simplicity argument seems to imply that it matters a huge amount how simple your data collection routine is. (Simple to whom?) For instance, it implies that collecting data via a process like “all outlinks from reddit with >3 upvotes” makes deceptive alignment considerably less likely than a process like “do whatever messy thing AI labs do now”. This seems really, really implausible: surely AIs won’t be doing much explicit reasoning about these details of the process, because the process will effectively be hardcoded in a massive number of places.
Evan and I have talked about these arguments at some point.
(I need to get around to writing a review of Conditioning Predictive Models which makes these counterarguments.)