Let’s be realistic here: the AGI research world is a small fringe element of AI research as a whole. It generally has a high opinion of its own importance, an opinion not widely shared by the rest of the AI research community or by the world at large.
We are in a self-selected group of people who share our beliefs. This will bias our thinking, leading us to be too confident of our shared beliefs. We need to strive to counter that effect and keep a sense of perspective, particularly when we’re trying to anticipate what other people are likely to do.
I’m not sure I get what you’re saying.
Either the creation of smarter-than-human intelligence is the most powerful thing in the world, or it isn’t.
If it is, it would be surprising if nobody in the powerful organizations I’m talking about realizes it, especially once a few breakthroughs are made public and we get closer to AGI.
If that is the case, this probably means that at some point AGI researchers will be “on the radar” of these people and that they should at least think about preparing for that day.
You can’t have your cake and eat it too; you can’t believe that AGI is the most important thing in the world and simultaneously think that it’s so unimportant that nobody’s going to bother with it.
I’m not saying that right now there is much danger of that. But if we can’t predict when AGI is going to happen (which means wide confidence bounds, 5 years to 100 years, as Eliezer once said), then we don’t know how soon we should start thinking about security, which probably means that starting as soon as possible is best.
“If it is, it would be surprising if nobody in the powerful organizations I’m talking about realizes it, especially once a few breakthroughs are made public and we get closer to AGI.”
People in powerful organizations fail to notice important things all the time. To take a case study: in 2001, Warren Buffett and Ted Turner just happened to notice that there were hundreds of nukes in Russia, sitting around in lightly guarded or unguarded facilities that anyone with a few AK-47s and a truck could have walked into and stolen from. They had to start their own organization, the Nuclear Threat Initiative, to take care of the problem, because no one else was doing anything.
The existence of historical examples where people in powerful organizations failed to realize important things is not evidence that it is the norm or that it can be counted on with strong confidence.
Yes, it is. How could examples of X not be evidence that “the norm is X”? It may not be sufficiently strong evidence, but if this one example is not sufficiently damning, there are certainly plenty more.
Yes, of course it is weak evidence. But I can come up with a dozen examples off the top of my head where powerful organizations did realize important things, so your examples are very weak evidence that this behavior is the norm. So weak that it can be regarded as negligible.
Important things that weren’t recognized by the wider populace as important things? Do you have citations? Even for much more mundane things, governments routinely fail either to notice them or to act once they have noticed. E.g., Chamberlain didn’t notice that Hitler wanted total control of Europe, even though he said so in his publicly available book Mein Kampf. Stalin didn’t notice that Hitler was about to invade, even though he had numerous warnings from his subordinates.
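To put a rough number on the “weak evidence” point argued above, here is a minimal sketch of the underlying Bayesian arithmetic. The likelihoods are purely illustrative assumptions (nothing here is measured): a single example of X shifts the odds toward “the norm is X” only by its likelihood ratio, which stays small whenever such examples would also turn up fairly often in a world where the norm is not X.

```python
# Illustrative only: made-up numbers showing why one example is weak evidence.
# Hypothesis H: "powerful organizations usually miss important developments."
# Evidence E: one documented case where they did miss one (the loose-nukes story).

prior_odds = 1.0            # assume an even prior: P(H) = P(not H) = 0.5

# Assumed likelihoods, chosen for illustration only:
# some misses get documented even if organizations usually notice things.
p_example_given_H = 0.9     # chance of finding such a case if H is true
p_example_given_notH = 0.6  # chance of finding such a case even if H is false

likelihood_ratio = p_example_given_H / p_example_given_notH   # = 1.5
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"likelihood ratio: {likelihood_ratio:.2f}")
print(f"posterior P(H):   {posterior_prob:.2f}")   # ~0.60: the example helps, but only a little
```

Under these made-up numbers, the one case study moves an even prior to roughly 0.6, a nudge that a dozen counterexamples pointing the other way would easily swamp, which seems to be the point both sides are circling.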
The reason to point out the crackpot aspect (e.g. item 12 in Baez’s Crackpot Index) is to adjust how people think about this question, not to argue that the question shouldn’t be asked or answered.
In particular, I want people to balance (at least) two dangers—the danger of idea-stealing and the danger of insularity slowing down innovation.
The “AGI” label is a marketing strategy by those involved. I am among those who are sceptical: generality is already implicit in the definition of “intelligence”.