Does either the linked paywalled analysis, or the unpublished analysis, consider the question of what competitors might do outside of the management of their AI research projects and outside of the question of AI safety measures? How detailed and comprehensive is the view of everything they might do even within those limits?
Narrow game-theoretic models are easy to analyze, but likely to be badly misleading if they don’t faithfully reflect the entire game that’s actually being played.
If you’re a participant who really believes that there’s an existential risk in play, and that you’re in a “hard takeoff” world, then a rational response to a suspicion that somebody is way ahead of you might be to nuke them. Or start a conventional war with them, or try to get them restricted by their or your government, or deprive them of resources, or do other things to hinder them outside the limits of the “game”.
Even if you think that the important game will be between relatively normal private-sector actors who won’t have super-dramatic options like nukes and total warfare, they probably will have all kinds of unmodelled wildcard actions available to them… like any commercial dirty trick you can think of. And if everybody, including governments and the public, becomes convinced that there’s really existential risk in play, then even the most dramatic things are on the table. Even a third party who wasn’t doing research to begin with might choose to take such actions, so the set of players could get complicated.
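To make the brittleness point concrete, here is a deliberately silly toy sketch (the moves and numbers are entirely made up by me, and have nothing to do with either analysis). In a race game whose only modelled moves are “careful” and “cut corners”, being careful comes out as everyone’s best response; add a single previously unmodelled “hinder the rival” move and every best response flips.

```python
# Toy sketch only: moves and payoffs invented for illustration, not taken
# from any real analysis. Two identical labs each pick one action.
def best_responses(payoff, actions):
    """For each opponent action, list my payoff-maximizing replies."""
    return {
        opp: [a for a in actions if payoff(a, opp) == max(payoff(b, opp) for b in actions)]
        for opp in actions
    }

def payoff(me, opp):
    # Hypothetical numbers: "cut_corners" is faster but riskier; "hinder"
    # slows the rival down by out-of-game means, at some risk of its own.
    speed = {"careful": 1.0, "cut_corners": 2.0, "hinder": 1.0}
    risk = {"careful": 0.0, "cut_corners": 0.4, "hinder": 0.1}
    opp_speed = max(speed[opp] - (1.0 if me == "hinder" else 0.0), 0.1)
    win_prob = speed[me] / (speed[me] + opp_speed)
    shared_disaster_risk = risk[me] + risk[opp]
    return win_prob - shared_disaster_risk

narrow_game = ["careful", "cut_corners"]
full_game = narrow_game + ["hinder"]

print("Narrow model best responses:", best_responses(payoff, narrow_game))
print("With the extra move:        ", best_responses(payoff, full_game))
```

The specific numbers don’t matter; the point is just that omitting one available move can invalidate every prediction the narrow model makes.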
Also, it seems like there’s a bit of a contradiction between the idea that a clear leader may feel it has breathing room to work on safety, and the idea of restricting information about the state of play. If there were secrecy and no effective spying, then how would you know whether you were the leader? Without information about what the other side was actually up to, the conservative assumption would be that they were at least as far along as you were, so you should make the minimum supportable investment in safety, and at the same time consider dramatic “outside the game” actions.
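And here is a crude numerical version of that “conservative assumption” incentive, again with made-up numbers and a made-up functional form: a fixed amount of safety work costs you much more win probability if you have to assume a dead heat than if you know you hold a comfortable lead.

```python
# Made-up numbers, purely to illustrate the direction of the incentive.
def win_prob(me, rival):
    # Toy contest success function: chance of finishing first.
    return me / (me + rival)

def win_prob_cost_of_safety(my_progress, assumed_rival, safety_delay=0.3):
    # Safety work sets your effective progress back by `safety_delay`.
    return win_prob(my_progress, assumed_rival) - win_prob(my_progress - safety_delay, assumed_rival)

print("Known comfortable lead:", round(win_prob_cost_of_safety(2.0, 1.0), 3))  # 0.037
print("Assumed dead heat:     ", round(win_prob_cost_of_safety(1.0, 1.0), 3))  # 0.088
```

In this toy, the same safety effort looks more than twice as expensive to a project forced to assume it is merely tied, which is the direction the incentive pushes.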
Not only does all of that make it more complicated to understand the positive or negative impact of spying on who wins the race, but some of the more dramatic moves might get pretty close to being existential risks in themselves.
I’m not sure I’d want to make any guesses, but it surely doesn’t seem at all supported that preventing espionage should be a priority.
It doesn’t even seem likely to me that it’s possible to create any game-theoretic model that gives reliable insight into such a question. If you forget to include even one possible real-world move, then the model may fail catastrophically.
By the way, if you don’t like espionage, then another argument against adopting secrecy to begin with is that without secrecy there is no espionage, and therefore no potential risk associated with espionage (except, of course, for those risks shared with transparency itself). Of course, that might have to mean forcibly eliminating secrecy even if some or all of the players would prefer to have secrecy.
One should expect espionage to increase enmity between competitors
Why? When the stakes are high, even allies spy on one another all the time. If one side is spying, the other side is probably also spying. It’s normal. So how would it increase enmity?
Intuitively, I would expect essentially all of the strength of opposition in “hard takeoff” arms races to come either from desires to own the results[1], or from mutual convictions that “an AI aligned with those guys is worse than a random AI”. Spying seems like pretty small potatoes compared to those.
I tend to think that if something is actually powerful enough to represent an existential risk, then there’s a very strong, if rebuttable, presumption that no private organization, and maybe no institution of any kind we have today, ought to be allowed to “own” it at all. But that doesn’t mean that’s how things will actually play out…
I’m not sure I’d want to make any guesses, but it surely doesn’t seem at all supported that preventing espionage should be a priority.
It doesn’t even seem likely to me that it’s possible to create any game-theoretic model that gives reliable insight into such a question. If you forget to include even one possible real-world move, then the model may fail catastrophically.
I think your skepticism here is radical; it proves too much. If you consistently applied it you’d be reduced to basically not making any guesses about the future at all.
By the way, if you don’t like espionage, then another argument against adopting secrecy to begin with is that without secrecy there is no espionage, and therefore no potential risk associated with espionage (except, of course, for those risks shared with transparency itself). Of course, that might have to mean forcibly eliminating secrecy even if some or all of the players would prefer to have secrecy.
What? The main reason for disliking espionage is that it decreases the lead of the leader. I suppose giving away everything (and thereby decreasing the lead even more) has the silver lining of maybe reducing enmity between projects… but it’s not worth doing unless you thought the enmity was the overwhelmingly dominant factor.
One should expect espionage to increase enmity between competitors
Why? When the stakes are high, even allies spy on one another all the time. If one side is spying, the other side is probably also spying. It’s normal. So how would it increase enmity?
But you don’t even think espionage increases enmity? It may be normal, but historically it does seem to increase enmity, and I can think of some mechanisms by which it might.
I think your skepticism here is radical; it proves too much. If you consistently applied it you’d be reduced to basically not making any guesses about the future at all.
I think that proves too much. I’m saying that game theory in particular is brittle, and that I’m not convinced when only that brittle method has been brought to bear. That doesn’t mean that nothing can ever be convincing.
I will admit that I think something like espionage is probably unusually unpredictable, and maybe its effects can never be predicted very well… but that’s only about espionage. It doesn’t mean that nothing at all can be predicted.
On edit: … and if I were reduced to making no predictions, that wouldn’t mean I was wrong, just that useful predictions were, unfortunately, unavailable, no matter how desirable they might be...
The main reason for disliking espionage is that it decreases the lead of the leader. [...] unless you thought the enmity was the overwhelmingly dominant factor.
But, again, as you’ve described it, the value added by having a clear leader is mediated through their knowing that they’re the clear leader. If they don’t know, there’s no value.
But you don’t even think espionage increases enmity?
I think it’s probably insignificant compared to the “intrinsic” enmity in the scenario.
I’ve seen private sector actors get pretty incensed about industrial espionage… but I’m not sure it changed their actual level of competition very much. On the government side, there’s a whole ritual of talking about being upset when you find a spy, but it seems like it’s basically just that.
Also, it seems like there’s a bit of a contradiction between the idea that a clear leader may feel it has breathing room to work on safety, and the idea of restricting information about the state of play. If there were secrecy and no effective spying, then how would you know whether you were the leader? Without information about what the other side was actually up to, the conservative assumption would be that they were at least as far along as you were, so you should make the minimum supportable investment in safety, and at the same time consider dramatic “outside the game” actions.
Here is an unpaywalled version of the first model.

In the first model, the effect of a close race increasing risk through corner cutting only happens when projects know how they are doing relative to their competitors. I think it is useful to distinguish two different kinds of secrecy: a project’s achievements can be secret, its techniques can be secret, or both. In the Manhattan Project case, the existence of the Manhattan Project and the techniques for building nuclear bombs were both secret. But you can easily imagine an AI arms race where techniques are secret but the existence of competing projects or their general level of capabilities is not secret. In such a situation you can know about the size of leads without espionage. And adding espionage could decrease the size of leads and increase enmity, making a bad situation worse.
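As a toy rendering of that distinction (my own sketch, not the actual model from the paper): when achievements are public, safety effort can respond to the visible size of the lead, and espionage that narrows the lead pushes toward corner-cutting; when achievements are secret and there is no espionage, the closeness of the race has nothing to act on.

```python
# Toy sketch (not the paper's model): safety effort as a response to the
# *observed* lead. With achievements secret and no espionage, there is no
# observed lead for closeness to act on.

def chosen_safety(observed_lead):
    # Made-up decision rule: invest more in safety the larger the visible
    # lead; with no visibility, fall back on a fixed prior-based level.
    if observed_lead is None:
        return 0.5
    return min(1.0, 0.2 + observed_lead)

print("Achievements public, clear lead:     ", chosen_safety(0.8))   # 1.0 - room for safety
print("Espionage narrows the apparent lead: ", chosen_safety(0.1))   # 0.3 - corner-cutting
print("Achievements secret, lead unobserved:", chosen_safety(None))  # 0.5 - closeness can't bite
```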
I think the “outside the game” criticism is interesting. I’m not sure whether it is correct or not, and I’m not sure if these models should be modified to account for it, but I will think about it.
I’ve seen private sector actors get pretty incensed about industrial espionage… but I’m not sure it changed their actual level of competition very much. On the government side, there’s a whole ritual of talking about being upset when you find a spy, but it seems like it’s basically just that.
I don’t think it’s fair to say that governments getting upset about spies is just talk. Or rather: governments assume that they are being spied on most of the time, and when they find spying they have already priced in, they don’t really react to it. But discovering a hitherto unsuspected spy in an especially sensitive role probably increases enmity a lot (though of course the amount will vary based on the nature of the government doing the discovering, the strategic situation, and the details of the case).
But you can easily imagine an AI arms race where techniques are secret but the existence of competing projects or their general level of capabilities is not secret.
How do you arrange for honest and credible disclosure of those things?