I don’t understand why it’s plausible to think that AIs might collectively have different goals from humans.
Future posts will address this; here, we’re assuming that premise:
So, for what follows, let’s proceed from the premise: “For some weird reason, humans consistently design AI systems (with human-like research and planning abilities) that coordinate with each other to try to overthrow humanity.” Then what? What follows will necessarily feel wacky to people who find this hard to imagine, but I think it’s worth playing along, because I think “we’d be in trouble if this happened” is a very important point.