I’m not worried about the sort of person who would become a terrorist. Usually, they just have a goal like political change, and are willing to kill for it. Instead, I’m worried about the sort of person who becomes a mass-shooter or serial killer.
I’m worried about people who value hurting others for its own sake. If a terrorist group took control of AGI, then things might not be too bad. I think most terrorists don’t want to damage the world; they just want their political change. So they could just use their AGI to enact whatever political or other changes they want, and after that not be evil. But if someone who terminally values harming others, like a mass-shooter, took over the world, things would probably be much worse.
Could you clarify what you’re thinking of when you say “so any prospective murderer who was ‘malicious [and] willing to incur large personal costs to cause large amounts of suffering’ would already have far better options than a mass shooting”? What other, better options would they have that they don’t take?
Sorry, this is an infohazard. You don’t want someone to read this and think “actually, this is a cool idea”.
Empirically, few if any mass-shooters (or serial killers) have this kind of abstract, scope-insensitive motivation. Look at this writeup of a DoJ study: it’s almost always a specific combination of a violent and traumatic background, a short-term crisis period, and ready access to firearms.
I think efforts to focus on ‘mental health’ pay only lip service to this point. We live in a culture that expects male culture to be about learning to traumatize others and learning to tolerate trauma, while at the same time decrying it as toxic male culture. Men are understandably confused these days, and the lack of adequate social services, combined with a country filled with guns and media of all sorts that celebrates violence as long as it’s ‘good violence’, is a recipe for this kind of tragedy. Focusing on the individual shooters as the problem isn’t the answer; I believe it is a systemic problem.
This is a good point. I didn’t know this. I really should have researched things more.
Even if there’s just one such person, I think that one person still has a significant chance of succeeding.
However, more importantly, I don’t see how we could rule out that there are people who want to cause widespread destruction and are willing to sacrifice things for it, even if they wouldn’t be interested in being a serial killer or mass shooter.
I mean, I don’t see how we have any data. I think that for almost all of history, there has been little opportunity for a single individual to cause world-level destruction. Maybe around the time of the Cold War someone could have managed to trick the USSR and USA into starting a nuclear war, but other than that, I can’t think of many other opportunities.
There are eight billion people in the world, and potentially all it would take is one, with sufficient motivation, to bring about a really bad outcome. Since ruling this out requires a conjunction over eight billion people, I think it would be hard to show that there is no such person.
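As a rough back-of-envelope illustration (my own numbers, not anything from the discussion above): if each person independently had some tiny probability $p$ of having both the motivation and the opportunity, the chance that at least one such person exists among $N \approx 8 \times 10^9$ people is

$$1 - (1 - p)^{N} \approx 1 - e^{-pN},$$

which is already about $0.55$ for $p = 10^{-10}$ and essentially $1$ for $p = 10^{-8}$.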
So I’m still quite concerned about malicious non-state actors.
And I think there are some reasonably doable, reasonably low-cost things someone could do about this. Potentially just requiring a very thorough security clearance before allowing someone to work on AGI-related stuff could make a big difference. And increasing the physical security of the AGI organization could also be helpful. But currently, I don’t think people at Google and other AI labs are worrying about this. We could at least tell them about it.