What does it imply for things like AI governance and global coordination on x-risks?
I read the article a while ago and vaguely concluded that there should be some implications here, though as a non-expert I'm largely uncertain about their direction or magnitude. I'd be interested to hear what people think (especially those who focus on policy).