If nothing else, it seems like you could literally add garbage bits to the treacherous models
Okay, sure.
It seems to me like this is clearly wrong in the limit (since simple consequentialists would take over simple physics).
It’s not clear to me that there isn’t meaningful overhead involved.
You are saying that a special moment is a particularly great one to be treacherous. But if P(discovery) is 99.99% during that period, and there is any other treachery-possible period where P(discovery) is small, then that other period would have been better after all. Right?
I agree with what you’re saying, but I don’t see how it contradicts what I was saying. First, what I had in mind when saying that some timesteps are better for treachery was that when the agent acts on a false prediction, it has a greater effect on the world, though of course P(discovery) is also relevant. But my point is that when multiple treacherous models pick the same timestep to err, there may be pros and cons to doing this, but one thing that isn’t on the cons list is that, in the long run, it makes our lives easier if they do. Making our lives difficult is a public good for treacherous models.
So treacherous models won’t be trying to avoid collisions in order to make queries be linear in p(treachery)/p(truth). If P(discovery) increases when multiple models are trying to be treacherous at the same time (which we could go on to discuss; it’s not obvious to me either way as of yet), that will be balanced against the inherent variation in some timesteps being better for treachery than others.
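As a toy illustration of why collisions help us (my own toy model, not anything from the thread): suppose each treacherous model independently picks the timestep it judges best for treachery, and a single query at a timestep exposes every model that erred there. Then the queries we spend scale with the number of distinct chosen timesteps, not with the number of models.

```python
import random

random.seed(0)

# Toy model: 1,000 treacherous models pick among 10,000 timesteps, weighted
# toward "better" (here: earlier) timesteps. One query exposes everyone who
# erred at that timestep, so queries needed = distinct timesteps chosen.
N_MODELS = 1000
N_TIMESTEPS = 10000
weights = [1.0 / (t + 1) for t in range(N_TIMESTEPS)]  # earlier = better

chosen = random.choices(range(N_TIMESTEPS), weights=weights, k=N_MODELS)
print(f"{N_MODELS} treacherous models, {len(set(chosen))} queries suffice")
```

On this toy model, the more their “best timestep” judgments agree, the fewer queries treachery costs us, which is the sense in which spreading out is a public good the models won’t individually pay for.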
If you are willing to query only a billion points, and there are a billion competent treacherous models, and if the game is zero-sum between you and the models, then you’re going to get at least as screwed as by the billionth-best opportunity for treachery. Otherwise, one of the treacherous models made a mistake: apparently there was a top-billion treachery opportunity where you didn’t query, so they should have been able to get at least that much value (and you should have lost at least that much value).
That’s not quite right since your querying behavior will be stochastic (at least from the model’s epistemic position). But you get the same conclusion to within a constant factor: if there are 2 billion treacherous models, then out of the top 2 billion opportunities for treachery, there must be some where our subjective probability of querying is at most 1⁄2, and if the models are competent they will get at least as much value as they could by defecting at one of those times, i.e. at least 1⁄2 of the harms from the 2-billionth-best opportunity for treachery. So we will lose at least that much value.
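To spell out the counting step (a sketch under the assumptions above, writing $B$ for the query budget and $q_i$ for our subjective probability of querying at the $i$-th best opportunity for treachery):

$$\sum_{i=1}^{2B} q_i \;\le\; \mathbb{E}[\text{number of queries}] \;\le\; B \quad\Longrightarrow\quad \min_{i \le 2B} q_i \;\le\; \tfrac{1}{2},$$

so some top-$2B$ opportunity is queried with probability at most 1⁄2, and a competent model defecting there gets through with probability at least 1⁄2, giving the bound above.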
(Those arguments aren’t totally formal; we didn’t even really discharge the “at least N treacherous models” assumption, which you need to do in order to show that your decision about when to query isn’t so influenced by any given model.)
If you are willing to query only a billion points, and there are a billion competent treacherous models, and if the game is zero-sum between you and the models, then you’re going to get at least as screwed as by the billionth-best opportunity for treachery.
The treacherous models make their decisions first, and whether we query at a timestep depends on whether they chose that moment to be treacherous. We don’t have to say: this timestep is one of the billion best for treachery, so we had better query. The decision process a treacherous model faces is (see the sketch after this list):
1. From past observations, does it seem alpha is low enough that an honest model is in the top few?
2. If no, [let’s just assume whatever happens here is bad]
3. If yes, is the honest model going to leave the top few any time?
4. If yes, wait. [let’s assume this ends badly]
5. If no, admit defeat, because whenever you err, that will be when a query gets triggered. Salvage the situation by decreasing the probability of some action the demonstrator would take that you don’t like much, in order to shift probability mass to some other action they would take that you like a little better. OR
6. Hope that 1) other treacherous models, like soldiers on the front line storming the enemy and facing certain death so that their brothers behind them can win the battle, give erring a go anyway, and that 2) the operators quietly increase alpha after the billionth query. Acting on this hope, wait. (The front-line treacherous models are not entirely self-sacrificial; they might figure they may as well hope they were wrong about alpha. But the models know when queries happen, because every observation they get comes from a query, and it wouldn’t take long to figure out that alpha really truly isn’t high enough for them to execute treachery successfully.)
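To keep the branching straight, here is a rough code rendering of that decision process. This is only a sketch; the flags and return strings are my own illustrative labels, not anything from the setup above.

```python
# Hypothetical sketch of the six-step decision process above.
# All names and return strings are illustrative labels, not part of the setup.

def treacherous_policy(alpha_low_enough: bool,
                       honest_leaves_top_few: bool,
                       volunteer: bool) -> str:
    if not alpha_low_enough:          # Step 1 fails -> Step 2
        return "assume this ends badly"
    if honest_leaves_top_few:         # Step 3 -> Step 4
        return "wait (assume this ends badly)"
    if not volunteer:
        # Step 5: any error triggers a query, so overt treachery fails;
        # salvage by shifting probability mass among actions the
        # demonstrator would actually take.
        return "salvage: nudge mass between demonstrator actions"
    # Step 6: err anyway, hoping other models volunteer too and the
    # operators quietly raise alpha once the query budget is spent.
    return "volunteer: err now and hope alpha was misjudged"
```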
Above in the thread, I was talking about the game theory of step 6 (although I maybe wasn’t making the case as strongly as I could have). Which models would ever volunteer? They don’t care about the long-term goal of exhausting our query resources so that other treacherous models can capitalize on a quietly increased alpha. All that is to say, I don’t follow the comment I excerpted.
(In case this comes up later: I think about the long-run percentage of query-timesteps that we would tolerate, not the total number. So if we ever have cause to consider a case where attempted treachery “merely gets delayed” but still eventually happens, and still results in a query, then depending on how much it is delayed, that could still qualify as a win.)