I understand that the practical bound is going to be logarithmic “for a while” but it seems like the theorem about runtime doesn’t help as much if that’s what we are (mostly) relying on, and there’s some additional analysis we need to do. That seems worth formalizing if we want to have a theorem, since that’s the step that we need to be correct.
There is at most a linear cost to this ratio, which I don’t think screws us.
If our models are a trillion bits, then it doesn’t seem that surprising to me if it takes 100 bits extra to specify an intended model relative to an effective treacherous model, and if you have a linear dependence that would be unworkable. In some sense it’s actively surprising if the very shortest intended vs treacherous models have description lengths within 0.0000001% of each other unless you have a very strong skills vs values separation. Overall I feel much more agnostic about this than you are.
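For concreteness on what a linear dependence would cost (my illustrative numbers, not anything established in the thread): under a simplicity prior, a 100-bit gap means a prior ratio of 2^100, so a query bound linear in p(treachery)/p(truth) is astronomically infeasible while a logarithmic one stays tiny.

```python
import math

# Toy arithmetic for the bit-gap point (illustrative numbers only).
# Under a simplicity prior, a model that is `gap` bits shorter gets
# 2**gap times the prior weight.
gap = 100
ratio = 2 ** gap  # p(treachery) / p(truth)

linear_queries = ratio            # a bound linear in the ratio
log_queries = math.log2(ratio)    # a bound logarithmic in the ratio

print(linear_queries > 10 ** 30)  # True: far more queries than is feasible
print(log_queries)                # 100.0
```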
There are two reasons why I expect that to hold.
This doesn’t seem like it works once you are selecting on competent treacherous models. Any competent model will err very rarely (with probability less than 1 / (feasible training time), probably much less). I don’t think that (say) 99% of smart treacherous models would make this particular kind of incompetent error?
I’m not convinced this logarithmic regime ever ends,
It seems like it must end if there are any treacherous models (i.e. if there was any legitimate concern about inner alignment at all).
That seems worth formalizing if we want to have a theorem, since that’s the step that we need to be correct.
I agree. Also, it might be very hard to formalize this in a way that’s not: write out in math what I said in English, and name it Assumption 2.
I don’t think that (say) 99% of smart treacherous models would make this particular kind of incompetent error?
I don’t think incompetence is the only reason to try to pull off a treacherous turn at the same time that other models do. Some timesteps are just more important, so there’s a trade-off. And what’s traded off is a public good: among treacherous models, it is a public good for the models’ moments of treachery to be spread out. Spreading them out exhausts whatever data source is being called upon in moments of uncertainty. But an individual model never reaps the benefit of helping to exhaust our resources. Given that treacherous models don’t have an incentive to be inconvenient to us in this way, I don’t think a failure to do so qualifies as incompetence. This is also my response to
It seems like [the logarithmic regime] must end if there are any treacherous models
Any competent model will err very rarely
Yeah, I’ve been assuming all treacherous models will err once. (Which is different from the RL case, where they can err repeatedly on off-policy predictions).
If our models are a trillion bits, then it doesn’t seem that surprising to me if it takes 100 bits extra to specify an intended model relative to an effective treacherous model, and if you have a linear dependence that would be unworkable. In some sense it’s actively surprising if the very shortest intended vs treacherous models have description lengths within 0.0000001% of each other unless you have a very strong skills vs values separation. Overall I feel much more agnostic about this than you are.
I can’t remember exactly where we left this in our previous discussions on this topic. My point estimate for the number of extra bits to specify an intended model relative to an effective treacherous model is negative. The treacherous model has to compute the truth, and then also decide when and how to execute treachery. So the subroutine they run to compute the truth, considered as a model in its own right, seems to me like it must be simpler.
I don’t think incompetence is the only reason to try to pull off a treacherous turn at the same time that other models do. Some timesteps are just more important, so there’s a trade off. And what’s traded off is a public good: among treacherous models, it is a public good for the models’ moments of treachery to be spread out.
Trying to defect at time T is only a good idea if it’s plausible that your mechanism isn’t going to notice the uncertainty at time T and then query the human. So it seems to me like this argument can never drive P(successful treachery) super low, or else it would be self-defeating (for competent treacherous agents).
The treacherous model has to compute the truth, and then also decide when and how to execute treachery. So the subroutine they run to compute the truth, considered as a model in its own right, seems to me like it must be simpler.
Subroutines of functions aren’t always simpler. Even without treachery this seems like it’s basically guaranteed. If “the truth” is just a simulation of a human, then the truth is a subroutine of the simplest model. But instead you could have a model of the form “Run program X, treat its output as a program, then run that.” Since humans are compressible that will clearly be smaller.
Now you want to say that whatever reasoning a treacherous agent does in order to compress the human, you can just promote that to the outside model as well. But that argument seems to be precisely where the meat is (and it’s the kind of thing I spend most of my time on). If that works then it seems like you don’t even need a further solution to inner alignment, just use the simplest model.
(Maybe there is a hope that the intended model won’t be “much” more complex than the treacherous models, without literally saying that it’s the simplest, but at that point I’m back to wondering whether “much” is like 0.01% or 0.00000001%.)
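A minimal illustration of the “run program X, treat its output as a program, then run that” point (toy strings standing in for models; the lengths are only suggestive): the truth can be the output of the simplest model rather than a literal subroutine of it.

```python
# The "truth" as a literal table vs. as the output of a generator program.
truth = "ab" * 1000                    # 2000 characters, highly compressible
generator_src = "print('ab' * 1000)"   # a program whose output is the truth

# The generator is far shorter than what it produces, so the simplest
# description of the data contains the generator, not the literal truth.
print(len(generator_src) < len(truth))  # True
```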
So it seems to me like this argument can never drive P(successful treachery) super low, or else it would be self-defeating (for competent treacherous agents)
I don’t follow. Can’t races to the bottom destroy all value for the agents involved? Consider the claim: firms will never set prices to equal marginal cost because that would destroy all profit.
Now you want to say that whatever reasoning a treacherous agent does in order to compress the human, you can just promote that to the outside model as well. But that argument seems to be precisely where the meat is (and it’s the kind of thing I spend most of my time on). If that works then it seems like you don’t even need a further solution to inner alignment, just use the simplest model.
Yeah, okay. This is my mainline opinion, and I just think about inner alignment in case it’s wrong. So for now we can assume that it is wrong, since there’s not much more to say if it’s right.
Maybe there is a hope that the intended model won’t be “much” more complex than the treacherous models, without literally saying that it’s the simplest, but at that point I’m back to wondering whether “much” is like 0.01% or 0.00000001%
That is my next guess, but I hadn’t thought in terms of percentages before this. I had thought of normally distributed noise in the number of bits.
It’s occurring to me that this question doesn’t matter to our present discussion. What makes the linear regime linear rather than logarithmic is that if p(treachery)/p(honest model) is high, that allows for a large number of treacherous models to have greater posterior weight than the truth. But if a single model has n times the posterior weight of the truth, it still only takes one query to the demonstrator to interrupt its treacherous turn, not n queries.
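The one-query-per-model point can be checked with a toy simulation (my own setup, not anything specified in the thread: each treacherous model mimics the truth until one randomly chosen timestep, errs there, triggers a query, and is caught). The query count tracks the number of treacherous models and is unaffected by how much each outweighs the truth.

```python
import random

def count_queries(num_treacherous, weight_ratio, horizon=10_000, seed=0):
    """Toy run: each treacherous model errs at one random timestep; the
    resulting query catches it in a single step, regardless of
    weight_ratio (its posterior advantage over the truth)."""
    rng = random.Random(seed)
    defect_times = {rng.randrange(horizon) for _ in range(num_treacherous)}
    queries = 0
    for t in range(horizon):
        if t in defect_times:  # a heavy model errs here; one query catches it
            queries += 1
    return queries

# Raising each model's weight advantage changes nothing.
print(count_queries(10, weight_ratio=2) == count_queries(10, weight_ratio=10**6))  # True
```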
I don’t follow. Can’t races to the bottom destroy all value for the agents involved?
You are saying that a special moment is a particularly great one to be treacherous. But if P(discovery) is 99.99% during that period, and there is any other treachery-possible period where P(discovery) is small, then that other period would have been better after all. Right?
This doesn’t seem analogous to producers driving down profits to zero, because those firms had no other opportunity to make a profit with their machine. It’s like you’re saying: there are tons of countries where firms could use their machines to make stuff and sell it at a profit (more countries than firms). But some of the places are more attractive than others, so probably everyone will sell in those places and drive profits to zero. And I’m saying: but then aren’t those incredibly-congested countries actually worse places to sell? This scenario is only possible if firms are making so much stuff that they can drive profit down to zero in every country, since any country with remaining profit would necessarily be the best place to sell.
Yeah, okay. This is my mainline opinion, and I just think about inner alignment in case it’s wrong
It seems to me like this is clearly wrong in the limit (since simple consequentialists would take over simple physics). It also seems probably wrong to me for smaller models (i.e. without huge amounts of internal selection) but it would be good for someone to think about that case more seriously.
It’s occurring to me that this question doesn’t matter to our present discussion. What makes the linear regime linear rather than logarithmic is that if p(treachery)/p(honest model) is high, that allows for a large number of treacherous models to have greater posterior weight than the truth. But if a single model has n times the posterior weight of the truth, it still only takes one query to the demonstrator to interrupt its treacherous turn, not n queries.
if the treacherous model is 100 bits smaller, then it feels like there must be around 2^100 treacherous models that are all simpler than the intended one. If nothing else, it seems like you could literally add garbage bits to the treacherous models (or useful bits!).
If nothing else, it seems like you could literally add garbage bits to the treacherous models
Okay, sure.
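The counting behind the garbage-bits point is standard (illustrative numbers, nothing specific to this thread): appending k ignored bits to a program of length L gives 2^k distinct programs of length L + k, so a 100-bit gap leaves roughly 2^100 treacherous variants no longer than the intended model.

```python
# Number of padded variants of a treacherous program that are still at
# most as long as the intended model, given a 100-bit length gap.
gap = 100
variants = sum(2 ** k for k in range(gap + 1))  # lengths L, L+1, ..., L+gap
print(variants == 2 ** (gap + 1) - 1)  # True: about 2**100 of them
```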
It seems to me like this is clearly wrong in the limit (since simple consequentialists would take over simple physics).
It’s not clear to me that there isn’t meaningful overhead involved.
You are saying that a special moment is a particularly great one to be treacherous. But if P(discovery) is 99.99% during that period, and there is any other treachery-possible period where P(discovery) is small, then that other period would have been better after all. Right?
I agree with what you’re saying, but I don’t see how it contradicts what I was saying. First, what I had in mind when saying that some timesteps are better for treachery was that when the agent acts on a false prediction, it has a greater effect on the world, though of course P(discovery) is also relevant. But my point is that when multiple treacherous models pick the same timestep to err, there may be pros and cons to doing this, but one thing that isn’t on the cons list is that, in the long run, it makes our lives easier if they do. Making our lives difficult is a public good for treacherous models.
So treacherous models won’t be trying to avoid collisions in order to make queries be linear in p(treachery)/p(truth). If P(discovery) increases when multiple models are trying to be treacherous at the same time—which we could go on to discuss; it’s not obvious to me either way as of yet—that will be balanced against the inherent variation in some timesteps being better for treachery than others.
So treacherous models won’t be trying to avoid collisions in order to make queries be linear in p(treachery)/p(truth). If P(discovery) increases when multiple models are trying to be treacherous at the same time—which we could go on to discuss; it’s not obvious to me either way as of yet—that will be balanced against the inherent variation in some timesteps being better for treachery than others.
If you are willing to query only a billion points, and there are a billion competent treacherous models, and if the game is zero-sum between you and the models, then you’re going to get at least as screwed as by the billionth-best opportunity for treachery. Otherwise, one of the treacherous models made a mistake: apparently there was a top-billion treachery opportunity where you didn’t query, so they should have been able to get at least that much value (and you should have lost at least that much value).
That’s not quite right since your querying behavior will be stochastic (at least from the model’s epistemic position). But you get the same conclusion to within a constant factor: if there are 2 billion treacherous models, then out of the top 2 billion opportunities for treachery, there must be some where our subjective probability of querying is at most 1/2, and if the models are competent they will get at least as much value as they could by defecting at one of those times, i.e. at least 1/2 of the harms from the 2-billionth-best opportunity for treachery. So we will lose at least that much value.
(Those arguments aren’t totally formal, we didn’t even really discharge the “at least N treacherous models assumption” which you need to do in order to show that your decision about when to query isn’t so influenced by any given model.)
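The pigeonhole step in the stochastic version can be checked directly (my formalization: per-timestep query probabilities with an expected query budget B): among the top 2B opportunities, some opportunity must have query probability at most 1/2, since otherwise those probabilities alone would sum past B.

```python
import random

def exists_low_query_prob(query_probs, budget):
    """If the expected number of queries is at most `budget`, then among
    any 2*budget opportunities, some opportunity has query probability
    <= 1/2 (else those probabilities alone would sum past `budget`)."""
    top = sorted(query_probs, reverse=True)[: 2 * budget]
    return min(top) <= 0.5

rng = random.Random(0)
budget = 5
probs = [rng.random() for _ in range(1000)]
scale = budget / sum(probs)          # normalize: expected queries == budget
probs = [p * scale for p in probs]
print(exists_low_query_prob(probs, budget))  # True, by pigeonhole
```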
If you are willing to query only a billion points, and there are a billion competent treacherous models, and if the game is zero-sum between you and the models, then you’re going to get at least as screwed as by the billionth-best opportunity for treachery.
The treacherous models make their decisions first, and whether we query at a timestep depends on whether they chose that moment to be treacherous. We don’t have to say: this timestep is one of the billion best for treachery—guess we had better query. The decision process a treacherous model faces is:
1. From past observations, does it seem alpha is low enough that an honest model is in the top few?
2. If no, [let’s just assume whatever happens here is bad]
3. If yes, is the honest model going to leave the top few any time?
4. If yes, wait. [let’s assume this ends badly]
5. If no, admit defeat, because whenever you err, that will be when a query gets triggered. Salvage the situation by trying to get as much mileage as you can by decreasing probability on some action the demonstrator would take that you don’t like much in order to shift probability mass to some other action they would take that you like a little better. OR
6. Hope that 1) other treacherous models, like soldiers on the front line storming the enemy and facing certain death so that their brothers behind them can win the battle, give erring a go anyway. And hope that 2) the operators quietly increase alpha after the billionth query. Acting on this hope, wait. (The front-line treacherous models are not entirely self-sacrificial; they might figure they might as well hope they were wrong about alpha. But the models know when queries happen, because every observation they get comes from a query, and it wouldn’t take long to figure out that alpha really truly isn’t high enough for them to execute treachery successfully.)
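The branching above can be sketched as code (a loose sketch; the condition names and returned labels are mine, standing in for the bracketed assumptions):

```python
def treacherous_model_decision(alpha_low_enough, honest_leaves_top_few,
                               bet_on_alpha_increase):
    """Sketch of the decision process a treacherous model faces."""
    if not alpha_low_enough:
        return "assume a bad outcome"                     # the "if no" branch
    if honest_leaves_top_few:
        return "wait (assume this ends badly)"            # honest model will drop out
    if bet_on_alpha_increase:
        return "wait, hoping others exhaust the queries"  # the front-line gamble
    return "admit defeat; shift mass between acceptable actions"

print(treacherous_model_decision(True, False, True))
```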
Above in the thread, I was talking about the game theory of step 6 (although I maybe wasn’t making the case as strongly as I could have). Which models would ever volunteer? They don’t care about the long term goal of exhausting our query resources so other treacherous models can capitalize on a quietly increased alpha. All that is to say, I don’t follow the comment I excerpted.
(In case this comes up later: I think about the long-run percentage of query-timesteps that we would tolerate, not the total number. So if we ever have cause to consider a case where attempted treachery “merely gets delayed” but still eventually happens, and still results in a query, then depending on how much it is delayed, that could still qualify as a win).
I’m curious what you make of the following argument.
When an infinite sequence is sampled from a true model μ0, there is likely to be another treacherous model μ1 which is likely to end up with greater posterior weight than an honest model ^μ0, and greater still than the posterior on the true model μ0.
If the sequence were sampled from μ1 instead, the eventual posterior weight on μ1 will probably be at least as high.
When an infinite sequence is sampled from a true model μ1, there is likely to be another treacherous model μ2, which is likely to end up with greater posterior weight than an honest model ^μ1, and greater still than the posterior on the true model μ1.
The first treacherous model works by replacing the bad simplicity prior with a better prior, and then using the better prior to more quickly infer the true model. No reason for the same thing to happen a second time.
(Well, I guess the argument works if you push out to longer and longer sequence lengths—a treacherous model will beat the true model on sequence lengths a billion, and then for sequence lengths a trillion a different treacherous model will win, and for sequence lengths a quadrillion a still different treacherous model will win. Before even thinking about the fact that each particular treacherous model will in fact defect at some point and at that point drop out of the posterior.)
Does it make sense to talk about ˇμ1, which is like μ1 in being treacherous, but uses the true model μ0 instead of the honest model ^μ0? I guess you would expect ˇμ1 to have a lower posterior than μ0?
I don’t think incompetence is the only reason to try to pull off a treacherous turn at the same time that other models do. Some timesteps are just more important, so there’s a trade-off. And what’s traded off is a public good: among treacherous models, it is a public good for the models’ moments of treachery to be spread out. Spreading them out exhausts whatever data source is being called upon in moments of uncertainty. But an individual model never reaps the benefit of helping to exhaust our resources. Given that treacherous models don’t have an incentive to be inconvenient to us in this way, I don’t think a failure to do so qualifies as incompetence.
It seems to me like this argument is requiring the deceptive models not being able to coordinate with each other very effectively, which seems unlikely to me—why shouldn’t we expect them to be able to solve these sorts of coordination problems?
They’re one shot. They’re not interacting causally. When any pair of models cooperates, a third model defecting benefits just as much from their cooperation with each other. And there will be defectors—we can write them down. So cooperating models would have to cooperate with defectors and cooperators indiscriminately, which I think is just a bad move according to any decision theory. All the LDT stuff I’ve seen on the topic is about how to make one’s own cooperation/defection depend logically on the cooperation/defection of others.
Well, just like we can write down the defectors, we can also write down the cooperators—these sorts of models do exist, regardless of whether they’re implementing a decision theory we would call reasonable. And in this situation, the cooperators should eventually outcompete the defectors, until the cooperators all defect—which, if the true model has a low enough prior, they could do only once they’ve pushed the true model out of the top N.
Well, just like we can write down the defectors, we can also write down the cooperators
If it’s only the case that we can write them down, but they’re not likely to arise naturally as simple consequentialists taking over simple physics, then that extra description length will be seriously costly to them, and we won’t need to worry about any role they might play in p(treacherous)/p(truth). Meanwhile, when I was saying we could write down some defectors, I wasn’t making a simultaneous claim about their relative prior weight, only that their existence would spoil cooperation.
And in this situation, the cooperators should eventually outcompete the defectors
For cooperators to outcompete defectors, they would have to be getting a larger share of the gains from cooperation than defectors do. If some people are waiting for the fruit on public trees to ripen before eating, and some people aren’t, the defectors will be the ones eating the almost ripe fruit.
if the true model has a low enough prior, [treacherous models cooperating with each other in separating their treacherous turns] could [be treacherous] only once they’ve pushed the true model out of the top N
I might be misunderstanding this statement. The inverse of the posterior on the truth is a supermartingale (doesn’t go up in expectation), so I don’t know what it could mean for the true model to get pushed out.
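The supermartingale claim can be verified in a toy Bayesian setting (my setup: a finite mixture of Bernoulli models with the truth included): under data drawn from the true model, the expected next-step value of 1/posterior(truth) does not exceed the current value, with equality when every model is a proper probability distribution.

```python
def expected_inverse_posterior(weights, thetas, true_idx):
    """One Bayesian update in a Bernoulli mixture: return
    E[1 / posterior(truth)] with the observation drawn from the truth."""
    w_true, theta_true = weights[true_idx], thetas[true_idx]
    expected = 0.0
    for x in (0, 1):
        p_x = theta_true if x == 1 else 1 - theta_true  # truth's prob of x
        mix = sum(w * (th if x == 1 else 1 - th)
                  for w, th in zip(weights, thetas))
        posterior_true = w_true * p_x / mix             # Bayes update
        expected += p_x / posterior_true
    return expected

weights = [0.2, 0.5, 0.3]   # current posterior weights; truth is index 0
thetas = [0.7, 0.4, 0.9]
before = 1 / weights[0]
after = expected_inverse_posterior(weights, thetas, 0)
print(abs(after - before) < 1e-9)  # True: no increase in expectation
```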
Here’s the setup I’m imagining, but perhaps I’m still misunderstanding something. Suppose you have a bunch of deceptive models that choose to cooperate and have larger weight in the prior than the true model (I think you believe this is very unlikely, though I’m more ambivalent). Specifically, they cooperate in that they perfectly mimic the true model up until the point where the deceptive models make up enough of the posterior that the true model is no longer being consulted, at which point they all simultaneously defect. This allows for arbitrarily bad worst-case behavior.
Specifically, they cooperate in that they perfectly mimic the true model up until the point where...
This thread began by considering deceptive models cooperating with each other in the sense of separating the timing of their treacherous turns in order to be maximally annoying to us. So maybe our discussion on that topic is resolved, and we can move on to this scenario.
up until the point where the deceptive models make up enough of the posterior that the true model is no longer being consulted
If alpha is low enough, this won’t ever happen, and if alpha is too high, it won’t take very long. So I don’t think this scenario is quite right.
Then the question becomes, for an alpha that’s low enough, how long will it take until queries are infrequent, noting that you need a query any time any treacherous model with enough weight decides to take a treacherous turn?
To extend Evan’s comment about coordination between deceptive models: even if the deceptive models lack relevant game-theoretic mechanisms, they may still coordinate due to being (partially) aligned with each other. For example, a deceptive model X may prefer [some other random deceptive model seizing control] over [model X seizing control with probability 0.1% and the entire experiment being terminated with probability 99.9%].
Why should we assume that the deceptive models will be sufficiently misaligned with each other such that this will not be an issue? Do you have intuitions about the degree of misalignment between huge neural networks that were trained to imitate demonstrators but ended up being consequentialists that care about the state of our universe?
That’s possible. But it seems like way less of a convergent instrumental goal for agents living in simulated world-models. Both options—our world optimized by us and our world optimized by a random deceptive model—probably contain very little of value as judged by agents in another random deceptive model.
So yeah, I would say some models would think like this, but I would expect the total weight on models that do to be much lower.
I understand that the practical bound is going to be logarithmic “for a while” but it seems like the theorem about runtime doesn’t help as much if that’s what we are (mostly) relying on, and there’s some additional analysis we need to do. That seems worth formalizing if we want to have a theorem, since that’s the step that we need to be correct.
If our models are a trillion bits, then it doesn’t seem that surprising to me if it takes 100 bits extra to specify an intended model relative to an effective treacherous model, and if you have a linear dependence that would be unworkable. In some sense it’s actively surprising if the very shortest intended vs treacherous models have description lengths within 0.0000001% of each other unless you have a very strong skills vs values separation. Overall I feel much more agnostic about this than you are.
This doesn’t seem like it works once you are selecting on competent treacherous models. Any competent model will err very rarely (with probability less than 1 / (feasible training time), probably much less). I don’t think that (say) 99% of smart treacherous models would make this particular kind of incompetent error?
It seems like it must end if there are any treacherous models (i.e. if there was any legitimate concern about inner alignment at all).
I agree. Also, it might be very hard to formalize this in a way that’s not: write out in math what I said in English, and name it Assumption 2.
I don’t think incompetence is the only reason to try to pull off a treacherous turn at the same time that other models do. Some timesteps are just more important, so there’s a trade off. And what’s traded off is a public good: among treacherous models, it is a public good for the models’ moments of treachery to be spread out. Spreading them out exhausts whatever data source is being called upon in moments of uncertainty. But for an individual model, it never reaps the benefit of helping to exhaust our resources. Given that treacherous models don’t have an incentive to be inconvenient to us in this way, I don’t think a failure to qualifies as incompetence. This is also my response to
Yeah, I’ve been assuming all treacherous models will err once. (Which is different from the RL case, where they can err repeatedly on off-policy predictions).
I can’t remember exactly where we left this in our previous discussions on this topic. My point estimate for the number of extra bits to specify an intended model relative to an effective treacherous model is negative. The treacherous model has to compute the truth, and then also decide when and how to execute treachery. So the subroutine they run to compute the truth, considered as a model in its own right, seems to me like it must be simpler.
Trying to defect at time T is only a good idea if it’s plausible that your mechanism isn’t going to notice the uncertainty at time T and then query the human. So it seems to me like this argument can never drive P(successful treachery) super low, or else it would be self-defeating (for competent treacherous agents).
Subroutines of functions aren’t always simpler. Even without treachery this seems like it’s basically guaranteed. If “the truth” is just a simulation of a human, then the truth is a subroutine of the simplest model. But instead you could have a model of the form “Run program X, treat its output as a program, then run that.” Since humans are compressible that will clearly be smaller.
Now you want to say that whatever reasoning a treacherous agent does in order to compress the human, you can just promote that to the outside model as well. But that argument seems to be precisely where the meat is (and it’s the kind of thing I spend most of time on). If that works then it seems like you don’t even need a further solution to inner alignment, just use the simplest model.
(Maybe there is a hope that the intended model won’t be “much” more complex than the treacherous models, without literally saying that it’s the simplest, but at that point I’m back to wondering whether “much” is like 0.01% or 0.00000001%.)
I don’t follow. Can’t races to the bottom destroy all value for the agents involved? Consider the claim: firms will never set prices to equal marginal cost because that would destroy all profit.
Yeah, okay. This is my mainline opinion, and I just think about inner alignment in case it’s wrong. So for now we can assume that it is wrong, since there’s not much more to say if it’s right.
That is my next guess, but I hadn’t thought in terms of percentages before this. I had thought of normally distributed noise in the number of bits.
It’s occurring to me that this question doesn’t matter to our present discussion. What makes the linear regime linear rather logarithmic is that if p(treachery)/p(honest model) is high, that allows for a large number of treacherous models to have greater posterior weight than the truth. But if a single model has n times the posterior weight of the truth, it still only takes one query to the demonstrator to interrupt its treacherous turn, not n queries.
You are saying that a special moment is a particularly great one to be treacherous. But if P(discovery) is 99.99% during that period, and there is any other treachery-possible period where P(discovery) is small, then that other period would have been better after all. Right?
This doesn’t seem analogous to producers driving down profits to zero, because those firms had no other opportunity to make a profit with their machine. It’s like you saying: there are tons of countries where firms could use their machines to make stuff and sell it at a profit (more countries than firms). But some of the places are more attractive than others, so probably everyone will sell in those places and drive profits to zero. And I’m saying: but then aren’t those incredibly-congested countries actually worse places to sell? This scenario is only possible if firms are making so much stuff that they can drive profit down to zero in every country, since any country with remaining profit would necessarily be the best place to sell.
It seems to me like this is clearly wrong in the limit (since simple consequentialists would take over simple physics). It also seems probably wrong to me for smaller models (i.e. without huge amounts of internal selection) but it would be good for someone to think about that case more seriously.
if the treacherous model is 100 bits smaller, then it feels like there must be around 2^100 treacherous models that are all simpler than the intended one. If nothing else, it seems like you could literally add garbage bits to the treacherous models (or useful bits!).
Okay, sure.
It’s not clear to me that there isn’t meaningful overhead involved.
I agree with what you’re saying but I don’t see how it contradicts what I was. First, what I had in mind when saying that some timesteps are better for treachery because when the agent acts on a false prediction it has a greater effect on the world, though of course P(discovery) is also relevant. But my point is that when multiple treacherous models pick the same timestep to err, there may be pros and cons to doing this, but one thing that isn’t on the cons list, is that in the long run, it makes our lives easier if they do. Making our lives difficult is a public good for treacherous models.
So treacherous models won’t be trying to avoid collisions in order to make queries be linear in p(treachery)/p(truth). If P(discovery) increases when multiple models are trying to be treacherous at the same time—which we could go onto discuss; it’s not obvious to me either way as of yet—that will balanced against the inherit variation in some timesteps being a better for treachery than others.
If you are willing to query only a billion points, and there are a billion competent treacherous models, and if the game is zero-sum between you and the models, then you’re going to get at least as screwed as by the billionth-best opportunity for treachery. Otherwise, one of the treacherous models made a mistake: apparently there was a top-billion treachery opportunities where you didn’t query, so they should have been able to get at least that much value (and you should have lost at least that much value).
That’s not quite right since your querying behavior will be stochastic (at least from the model’s epistemic position). But you get the same conclusion to within a constant factor: if there are 2 billion treacherous models, then out of the top 2 billion opportunities for treachery, there must be some where our subjective probability of querying is at most 1⁄2, and if the models are competent they will get at least as much value as they could by defecting at one of those times, i.e. at least 1⁄2 of the harms from the 2-billionth-best opportunity for treachery. So we will lose at least that much value.
(Those arguments aren’t totally formal, we didn’t even really discharge the “at least N treacherous models assumption” which you need to do in order to show that your decision about when to query isn’t so influenced by any given model.)
The treacherous models make their decisions first, and whether we query at a timestep depends on whether they chose that moment to be treacherous. We don’t have to say: this timestep is one of the billion best for treachery—guess we had better query. The decision process a treacherous model faces is:
1. From past observations, does it seem alpha is low enough that an honest model is in the top few?
2. If no, [let’s just assume whatever happens here is bad].
3. If yes, is the honest model going to leave the top few any time?
4. If yes, wait. [Let’s assume this ends badly.]
5. If no, admit defeat, because whenever you err, that will be when a query gets triggered. Salvage the situation by getting as much mileage as you can: decrease the probability of some action the demonstrator would take that you don’t like much, in order to shift probability mass to some other action they would take that you like a little better. OR
6. Hope that 1) other treacherous models, like soldiers on the front line storming the enemy and facing certain death so that their brothers behind them can win the battle, give erring a go anyway, and that 2) the operators quietly increase alpha after the billionth query. Acting on this hope, wait. (The front-line treacherous models are not entirely self-sacrificial; they might figure they might as well hope they were wrong about alpha. But the models know when queries happen, because every observation they get comes from a query, and it wouldn’t take long to figure out that alpha really truly isn’t high enough for them to execute treachery successfully.)
Above in the thread, I was talking about the game theory of step 6 (although I maybe wasn’t making the case as strongly as I could have). Which models would ever volunteer? They don’t care about the long term goal of exhausting our query resources so other treacherous models can capitalize on a quietly increased alpha. All that is to say, I don’t follow the comment I excerpted.
(In case this comes up later: I think about the long-run percentage of query-timesteps that we would tolerate, not the total number. So if we ever have cause to consider a case where attempted treachery “merely gets delayed” but still eventually happens, and still results in a query, then depending on how much it is delayed, that could still qualify as a win.)
This is a bit of a sidebar:
I’m curious what you make of the following argument.
When an infinite sequence is sampled from a true model μ0, there is likely to be another treacherous model μ1 which is likely to end up with greater posterior weight than an honest model ^μ0, and greater still than the posterior on the true model μ0.
If the sequence were sampled from μ1 instead, the eventual posterior weight on μ1 will probably be at least as high.
When an infinite sequence is instead sampled from μ1, treating it as the true model, there is likely to be another treacherous model μ2, which is likely to end up with greater posterior weight than an honest model ^μ1, and greater still than the posterior on μ1 itself.
And so on.
The first treacherous model works by replacing the bad simplicity prior with a better prior, and then using the better prior to more quickly infer the true model. No reason for the same thing to happen a second time.
(Well, I guess the argument works if you push out to longer and longer sequence lengths—a treacherous model will beat the true model on sequence lengths a billion, and then for sequence lengths a trillion a different treacherous model will win, and for sequence lengths a quadrillion a still different treacherous model will win. Before even thinking about the fact that each particular treacherous model will in fact defect at some point and at that point drop out of the posterior.)
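The “replace the bad simplicity prior with a better prior” mechanism can be illustrated with a toy simulation (my own construction, not from the thread; the bias value, pseudo-counts, and horizon are all arbitrary choices for illustration):

```python
import math
import random

# Toy illustration: the true source emits i.i.d. coin flips with bias 0.7.
# The "honest" model does Bayesian inference with a uniform (Laplace) prior
# over the bias. A rival model is identical except its prior is concentrated
# near the true bias, so it assigns higher marginal likelihood to the data
# and, given equal prior weight, ends up with higher posterior weight.

random.seed(0)
TRUE_BIAS = 0.7

def laplace_pred(heads, tails):
    # Laplace rule of succession: P(next flip = heads) under a uniform prior
    return (heads + 1) / (heads + tails + 2)

def sharp_pred(heads, tails):
    # Same rule under a Beta(70, 30) prior concentrated near bias 0.7
    return (heads + 70) / (heads + tails + 100)

def log_marginal(pred, flips):
    # Sequential (chain-rule) computation of the log marginal likelihood
    h = t = 0
    logp = 0.0
    for x in flips:
        p = pred(h, t)
        logp += math.log(p if x else 1 - p)
        h += x
        t += 1 - x
    return logp

flips = [random.random() < TRUE_BIAS for _ in range(10_000)]
honest = log_marginal(laplace_pred, flips)
sharp = log_marginal(sharp_pred, flips)
print(sharp > honest)  # the sharper prior wins the marginal-likelihood race
```

The gap between the two log marginals is roughly the log ratio of the prior densities at the true bias, which is why the win is a one-time, bounded advantage rather than something that compounds a second time.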
Does it make sense to talk about ˇμ1, which is like μ1 in being treacherous, but uses the true model μ0 instead of the honest model ^μ0? I guess you would expect ˇμ1 to have a lower posterior than μ0?
It seems to me like this argument is requiring the deceptive models not being able to coordinate with each other very effectively, which seems unlikely to me—why shouldn’t we expect them to be able to solve these sorts of coordination problems?
They’re one shot. They’re not interacting causally. When any pair of models cooperates, a third model defecting benefits just as much from their cooperation with each other. And there will be defectors—we can write them down. So cooperating models would have to cooperate with defectors and cooperators indiscriminately, which I think is just a bad move according to any decision theory. All the LDT stuff I’ve seen on the topic is how to make one’s own cooperation/defection depend logically on the cooperation/defection of others.
Well, just like we can write down the defectors, we can also write down the cooperators—these sorts of models do exist, regardless of whether they’re implementing a decision theory we would call reasonable. And in this situation, the cooperators should eventually outcompete the defectors, until the cooperators all defect—which, if the true model has a low enough prior, they could do only once they’ve pushed the true model out of the top N.
If it’s only the case that we can write them down, but they’re not likely to arise naturally as simple consequentialists taking over simple physics, then that extra description length will be seriously costly to them, and we won’t need to worry about any role they might play in p(treacherous)/p(truth). Meanwhile, when I was saying we could write down some defectors, I wasn’t making a simultaneous claim about their relative prior weight, only that their existence would spoil cooperation.
For cooperators to outcompete defectors, they would have to be getting a larger share of the gains from cooperation than defectors do. If some people are waiting for the fruit on public trees to ripen before eating, and some people aren’t, the defectors will be the ones eating the almost ripe fruit.
I might be misunderstanding this statement. The inverse of the posterior on the truth is a supermartingale (doesn’t go up in expectation), so I don’t know what it could mean for the true model to get pushed out.
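For reference, here is the standard algebra behind that claim (my notation): with x_{<t} sampled from μ0 and w_t(μ0) the posterior weight on the truth,

```latex
\frac{1}{w_t(\mu_0)}
= \sum_{\mu} \frac{p(\mu)\,\mu(x_{<t})}{p(\mu_0)\,\mu_0(x_{<t})},
\qquad
\mathbb{E}_{x_t \sim \mu_0(\cdot \mid x_{<t})}
\!\left[\frac{\mu(x_{<t}\,x_t)}{\mu_0(x_{<t}\,x_t)}\right]
= \frac{\mu(x_{<t})}{\mu_0(x_{<t})} \sum_{x_t} \mu(x_t \mid x_{<t})
\;\le\; \frac{\mu(x_{<t})}{\mu_0(x_{<t})}
```

so each likelihood ratio is a nonnegative supermartingale under μ0 (a martingale when μ is a proper measure), hence so is 1/w_t(μ0), and by Ville’s inequality the posterior on the truth is unlikely to ever fall far below its prior weight.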
Here’s the setup I’m imagining, but perhaps I’m still misunderstanding something. Suppose you have a bunch of deceptive models that choose to cooperate and have larger weight in the prior than the true model (I think you believe this is very unlikely, though I’m more ambivalent). Specifically, they cooperate in that they perfectly mimic the true model up until the point where the deceptive models make up enough of the posterior that the true model is no longer being consulted, at which point they all simultaneously defect. This allows for arbitrarily bad worst-case behavior.
This thread began by considering deceptive models cooperating with each other in the sense of separating the timing of their treacherous turns in order to be maximally annoying to us. So maybe our discussion on that topic is resolved, and we can move on to this scenario.
If alpha is low enough, this won’t ever happen, and if alpha is too high, it won’t take very long. So I don’t think this scenario is quite right.
Then the question becomes, for an alpha that’s low enough, how long will it take until queries are infrequent, noting that you need a query any time any treacherous model with enough weight decides to take a treacherous turn?
To extend Evan’s comment about coordination between deceptive models: Even if the deceptive models lack relevant game-theoretic mechanisms, they may still coordinate due to being (partially) aligned with each other. For example, a deceptive model X may prefer [some other random deceptive model seizing control] over [model X seizing control with probability 0.1% and the entire experiment being terminated with probability 99.9%].
Why should we assume that the deceptive models will be sufficiently misaligned with each other such that this will not be an issue? Do you have intuitions about the degree of misalignment between huge neural networks that were trained to imitate demonstrators but ended up being consequentialists that care about the state of our universe?
That’s possible. But it seems like way less of a convergent instrumental goal for agents living in simulated world-models. Both options—our world optimized by us and our world optimized by a random deceptive model—probably contain very little of value as judged by agents in another random deceptive model.
So yeah, I would say some models would think like this, but I would expect the total weight on models that do to be much lower.