Fear of others stealing your ideas is a crank trope, which suggests it may be a common human failure mode.
I think it might be correct in the entrepreneur/startup world, but it probably isn’t when it comes to technologies that are this powerful. Just think of nuclear espionage and of the kind of security that surrounds the development of military and intelligence hardware and software. If you’re building something that could overthrow all the power structures in the world, it would be surprising if nobody tried to spy on you (or worse: kill you, derail the project, steal the almost-finished code, etc.).
I’m not saying this only applies to the SIAI (though my original post was directed only at them, my question here is about the AGI research world in general, which includes the SIAI), or that it isn’t just one of many, many things that can go wrong. But I still think that when you’re playing with stuff this powerful, you should be concerned with security and not just expect to fly under the radar forever.
“Just think of nuclear espionage and of the kind of security that surrounds the development of military and intelligence hardware and software.”
The idea of the nuclear chain reaction was kept secret only because one man, Leo Szilard, realized the damage it could do and had his patent for the idea classified as a military secret. It wasn’t kept secret by default; if it weren’t for Szilard, it would probably have been published in physics journals like every other cool new idea about atoms, and the Nazis might well have gotten nukes before we did.
“If you’re building something that could overthrow all the power structures in the world, it would be surprising if nobody tried to spy on you (or worse; kill you, derail the project, steal the almost finished code, etc).”
Only if they believe you, which they almost certainly won’t. Even in the (unlikely) case that someone thought that an AI taking over the world was realistic, there’s still an additional burden of proof on top of that, because they’d also have to believe that SIAI is competent enough to have a decent shot at pulling it off, in a field where so many others have failed.
So you’re saying SIAI is deliberately appearing incompetent and far from its goal to avoid being attacked?
ETA: I realize you’re probably not saying it’s doing that already, but you certainly suggest that doing so is going to be in SIAI’s best interests going forward.
Let’s be realistic here: the AGI research world is a small fringe element of AI research as a whole, and it generally has a high opinion of its own importance, an opinion not shared by the wider AI research community or by the world at large.
We are in a self-selected group of people who share our beliefs. This will bias our thinking, leading us to be too confident of our shared beliefs. We need to strive to counter that effect and keep a sense of perspective, particularly when we’re trying to anticipate what other people are likely to do.
I’m not sure I get what you’re saying.
Either the creation of smarter than human intelligence is the most powerful thing in the world, or it isn’t.
If it is, it would be surprising if nobody in the powerful organizations I’m talking about realizes it, especially if a few breakthroughs are made public and as we get closer to AGI.
If that is the case, this probably means that at some point AGI researchers will be “on the radar” of these people and that they should at least think about preparing for that day.
You can’t have your cake and eat it too; you can’t believe that AGI is the most important thing in the world and simultaneously think that it’s so unimportant that nobody’s going to bother with it.
I’m not saying that right now there is much danger of that. But if we can’t predict when AGI is going to happen (which means wide confidence bounds, 5 years to 100 years, as Eliezer once said), then we don’t know how soon we should start thinking about security, which probably means that as soon as possible is best.
“If it is, it would be surprising if nobody in the powerful organizations I’m talking about realizes it, especially if a few breakthroughs are made public and as we get closer to AGI.”
People in powerful organizations fail to realize important things all the time. To take a case study, in 2001, Warren Buffett and Ted Turner just happened to notice that there were hundreds of nukes in Russia, sitting around in lightly guarded or unguarded facilities, which anyone with a few AK-47s and a truck could have just walked in and stolen. They had to start their own organization, the Nuclear Threat Initiative, to take care of the problem, because no one else was doing anything.
The existence of historical examples where people in powerful organizations failed to realize important things is not evidence that it is the norm or that it can be counted on with strong confidence.
Yes, it is. How could examples of X not be evidence that the norm is X? It may not be sufficiently strong evidence, but if this one example is not sufficiently damning, there are certainly plenty more.
Yes, of course it is weak evidence. But I can come up with a dozen examples off the top of my head where powerful organizations did realize important things, so your examples are very weak evidence that this behavior is the norm. So weak that it can be regarded as negligible.
Important things that weren’t recognized by the wider populace as important things? Do you have citations? Even for much more mundane things, governments routinely fail either to notice them or to act once they have noticed. E.g., Chamberlain didn’t notice that Hitler wanted total control of Europe, even though he said so in his publicly available book Mein Kampf. Stalin didn’t notice that Hitler was about to invade, even though he had numerous warnings from his subordinates.
The reason to point out the crackpot aspect (e.g. item 12 in Baez’s Crackpot Index) is to adjust how people think about this question, not to argue that the question shouldn’t be asked or answered.
In particular, I want people to balance (at least) two dangers—the danger of idea-stealing and the danger of insularity slowing down innovation.
The “AGI” framing is a marketing strategy by those involved. I am among those who are sceptical: generality is implicit in the definition of “intelligence”.