I don’t quite understand that argument; maybe someone could elaborate.
I think the idea is that if I make a perfectly safe AI by constraining it in some way, that doesn’t prevent someone else from making an unsafe AI and killing us all.