Even if there’s just one such person, I think that one person still has a significant chance of succeeding.
However, more importantly, I don’t see how we could rule out that there are people who want to cause widespread destruction and are willing to sacrifice things for it, even if they wouldn’t be interested in being a serial killer or mass shooter.
I mean, I don’t see how we have any data. For almost all of history, there has been little opportunity for a single individual to cause world-level destruction. Maybe around the Cold War someone could have managed to trick the USSR and the USA into starting a nuclear war; other than that, I can’t think of many opportunities.
There are eight billion people in the world, and potentially all it would take is one, with sufficient motivation, to bring about a really bad outcome. Since ruling this out requires a conjunction over all eight billion, I think it would be hard to show that no such person exists.
So I’m still quite concerned about malicious non-state actors.
And I think there are some reasonably doable, reasonably low-cost things someone could do about this. Potentially just requiring very thorough security clearances before allowing someone to work on AGI-related projects could make a big difference. And increasing the physical security of AGI organizations could also be helpful. But currently, I don’t think people at Google and other AI labs are worrying about this. We could at least tell them about it.