I think your skepticism here is radical; it proves too much. If you consistently applied it you’d be reduced to basically not making any guesses about the future at all.
I think that proves too much. I’m saying that game theory in particular is brittle, and that I’m not convinced so long as only that brittle method has been brought to bear. That doesn’t mean that nothing can ever be convincing.
I will admit that I think something like espionage is probably unusually unpredictable, and maybe its effects can never be predicted very well… but that’s only about espionage. It doesn’t mean that nothing at all can be predicted.
On edit: … and if I were reduced to making no predictions, that wouldn’t mean I was wrong, just that useful predictions were, unfortunately, unavailable, no matter how desirable they might be…
The main reason for disliking espionage is that it decreases the lead of the leader. [...] unless you thought the enmity was the overwhelmingly dominant factor.
But, again, as you’ve described it, the value added by having a clear leader is mediated through their knowing that they’re the clear leader. If they don’t know, there’s no value.
But you don’t even think espionage increases enmity?
I think it’s probably insignificant compared to the “intrinsic” enmity in the scenario.
I’ve seen private sector actors get pretty incensed about industrial espionage… but I’m not sure it changed their actual level of competition very much. On the government side, there’s a whole ritual of talking about being upset when you find a spy, but it seems like it’s basically just that.
Also, it seems like there’s a bit of a contradiction between the idea that a clear leader may feel it has breathing room to work on safety, and the idea of restricting information about the state of play. If there were secrecy and no effective spying, then how would you know whether you were the leader? Without information about what the other side was actually up to, the conservative assumption would be that they were at least as far along as you were, so you should make the minimum supportable investment in safety, and at the same time consider dramatic “outside the game” actions.
In the first model, the effect of a close race increasing risk through corner cutting only happens when projects know how they are doing relative to their competitors. I think it is useful to distinguish two different kinds of secrecy: a project’s achievements can be secret, its techniques can be secret, or both. In the Manhattan Project case, both the existence of the project and the techniques for building nuclear bombs were secret. But you can easily imagine an AI arms race where techniques are secret while the existence of competing projects, and their general level of capabilities, is not. In such a situation you can know the size of leads without espionage. And adding espionage could decrease the size of leads and increase enmity, making a bad situation worse.
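One way to make the information dependence concrete is a toy sketch. To be clear, the functional form and the numbers below are my own illustrative assumptions, not anything from the published model: each project picks a caution level based on the lead it believes it has, and under full secrecy the conservative belief is a lead of zero.

```python
# Toy sketch (illustrative assumptions only, not the published model):
# a project's chosen caution rises with its believed lead over rivals.
def chosen_caution(believed_lead, max_caution=1.0, min_caution=0.1):
    """Believed lead of 0 -> minimum caution; larger lead -> more caution."""
    return min_caution + (max_caution - min_caution) * max(believed_lead, 0.0)

true_capabilities = {"A": 0.9, "B": 0.3}  # A is the clear leader

# Case 1: capability levels are public (techniques may still be secret).
# The leader sees its real lead and can afford to invest in safety.
lead_A = true_capabilities["A"] - true_capabilities["B"]
caution_with_disclosure = chosen_caution(lead_A)

# Case 2: full secrecy, no espionage. The conservative belief is that the
# rival is at least as far along, i.e. believed lead = 0, so caution
# collapses to the minimum for everyone, including the actual leader.
caution_under_secrecy = chosen_caution(0.0)

print(caution_with_disclosure, caution_under_secrecy)
```

The point of the sketch is only that the corner-cutting effect runs through beliefs about the lead, not the lead itself: hide the state of play and even a comfortable leader behaves like a project in a dead heat.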
I think the “outside the game” criticism is interesting. I’m not sure whether it is correct or not, and I’m not sure if these models should be modified to account for it, but I will think about it.
I’ve seen private sector actors get pretty incensed about industrial espionage… but I’m not sure it changed their actual level of competition very much. On the government side, there’s a whole ritual of talking about being upset when you find a spy, but it seems like it’s basically just that.
I don’t think it’s fair to say that governments getting upset about spies is just talk. Or rather: governments assume they are being spied on most of the time, so when they find spying they have already priced in, they don’t react much. But discovering a hitherto unsuspected spy in an especially sensitive role probably increases enmity a lot (though the amount will vary with the nature of the discovering government, the strategic situation, and the details of the case).
But you can easily imagine an AI arms race where techniques are secret but the existence of competing projects or their general level of capabilities is not secret.
How do you arrange for honest and credible disclosure of those things?
Here is an unpaywalled version of the first model.