I agree that this is a real possibility, and in the table I did say as much at level 2.
From my perspective, it is entirely possible to have an alignment failure that works like this and occurs at difficulty level 2. This is still an ‘easier’ world than the higher levels, because in those higher-level worlds you can be killed much more swiftly, much earlier, and with far less warning.
The reason I wouldn’t put it at level 8 is that the models are presumably following a reasonable proxy for what we want, one that generalizes well beyond the human level but turns out to be inadequate in ways that only become apparent later on. Level 8 says not merely that some misgeneralization occurs, but that rapid, unpredictable misgeneralization occurs around the human level, such that alignment techniques quickly break down.
In the scenario you describe, there’d be an opportunity to notice what’s going on (after all, you’d have superhuman AI that more or less does what it’s told, which could help you predict the future consequences of even more superhuman AI), and the failure would occur much later.