If you are willing to query only a billion points, and there are a billion competent treacherous models, and if the game is zero-sum between you and the models, then you’re going to get at least as screwed as by the billionth-best opportunity for treachery.
The treacherous models make their decisions first, and whether we query at a timestep depends on whether they chose that moment to be treacherous. We don't have to say: this timestep is one of the billion best for treachery, so I guess we had better query. The decision process a treacherous model faces is (sketched in code after the list):
1. From past observations, does it seem alpha is low enough that an honest model is in the top few?
2. If no, [let's just assume whatever happens here is bad].
3. If yes, is the honest model going to leave the top few at any point?
4. If yes, wait. [let's assume this ends badly]
5. If no, admit defeat, because whenever you err, that will be when a query gets triggered. Salvage the situation by getting what mileage you can: decrease the probability of some action the demonstrator would take that you don't like much, and shift that probability mass to some other action they would take that you like a little better. OR
6. Hope that (1) other treacherous models, like soldiers on the front line storming the enemy and facing certain death so that their brothers behind them can win the battle, give erring a go anyway, and (2) the operators quietly increase alpha after the billionth query. Acting on this hope, wait. (The front-line treacherous models are not entirely self-sacrificial; they might figure they may as well hope they were wrong about alpha. But the models know when queries happen, because every observation they get comes from a query, and it wouldn't take long to figure out that alpha really isn't high enough for them to execute treachery successfully.)
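In case it helps, here is a minimal sketch of that decision process in code. Everything in it (the Decision labels, the three boolean judgments, the salvage_shift helper) is my own illustrative stand-in for the steps above, not anything from the actual setup.

```python
from enum import Enum, auto

class Decision(Enum):
    # Illustrative labels for the outcomes of steps 2, 4, 5, and 6 above.
    BRACKETED_BAD_CASE = auto()   # step 2: the text just assumes this case goes badly
    WAIT = auto()                 # step 4: bide time while the honest model is still around
    SALVAGE = auto()              # step 5: nudge probability mass among demonstrator actions
    HOPE_OTHERS_ERR = auto()      # step 6: bet that front-line models err and alpha gets raised

def treacherous_model_decision(alpha_seems_low: bool,
                               honest_model_will_leave_top_few: bool,
                               bets_on_frontline_models: bool) -> Decision:
    """The three booleans stand in for the model's own judgments at steps 1, 3, and 6."""
    if not alpha_seems_low:
        return Decision.BRACKETED_BAD_CASE                      # step 2
    if honest_model_will_leave_top_few:
        return Decision.WAIT                                    # step 4
    # Steps 5 and 6: any erroneous prediction triggers a query, so either salvage
    # a little value now or gamble that other models burn the query budget first.
    return Decision.HOPE_OTHERS_ERR if bets_on_frontline_models else Decision.SALVAGE

def salvage_shift(demonstrator_probs: dict, disliked: str, preferred: str, amount: float) -> dict:
    """Step 5's salvage move: move a little probability from a demonstrator action you
    dislike to one you like slightly better, staying within actions the demonstrator
    would actually take, so no error is introduced and no query is triggered."""
    shifted = dict(demonstrator_probs)
    amount = min(amount, shifted[disliked])
    shifted[disliked] -= amount
    shifted[preferred] += amount
    return shifted
```

Nothing here depends on how the model forms those judgments; the point is just that once it concludes alpha is too low and the honest model is staying put, its only options are step 5's salvage or step 6's gamble.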
Above in the thread, I was talking about the game theory of step 6 (although I maybe wasn't making the case as strongly as I could have). Which models would ever volunteer? They don't care about the long-term goal of exhausting our query resources so that other treacherous models can capitalize on a quietly increased alpha. All that is to say, I don't follow the comment I excerpted.
(In case this comes up later: I think about the long-run percentage of query-timesteps that we would tolerate, not the total number. So if we ever have cause to consider a case where attempted treachery "merely gets delayed" but still eventually happens, and still results in a query, then depending on how much it is delayed, that could still qualify as a win).
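To make that distinction concrete, here is a toy calculation (the delay and tolerance numbers are made up, and long_run_query_rate is just an illustrative helper): if attempted treachery keeps recurring but each attempt is delayed long enough, the total number of queries grows without bound while the long-run fraction of query-timesteps stays below the tolerated rate.

```python
def long_run_query_rate(delay_between_queries: int) -> float:
    """If a treachery attempt (and hence a query) recurs every `delay_between_queries`
    timesteps forever, the total query count is unbounded, but the long-run fraction
    of timesteps that are queries is just 1 / delay_between_queries."""
    return 1.0 / delay_between_queries

tolerated_rate = 1e-6            # made-up tolerance on the fraction of query-timesteps
print(long_run_query_rate(10_000_000) < tolerated_rate)   # True: delayed enough to count as a win
print(long_run_query_rate(1_000) < tolerated_rate)        # False: not delayed enough
```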