Do you think risks like “the AI in a box boxes you” will become realistic one day?
Edit: it seems pretty preventable, but maybe people at AI labs should be trained to recognize and resist it once AI becomes very powerful?