e.g. if it’s clear that the AI could not pull off the cybersecurity operation required to avoid being easily shut down, then this is a fairly strong argument that this AI agent wouldn’t be able to pose a threat
I agree that this is a fairly strong argument that this AI agent wouldn’t be able to cause problems while rogue. However, I think there is also a concern that this AI will be able to cause serious problems via the affordances it is granted through the AI lab.
In particular, AIs might be given huge affordances internally with minimal controls by default. And this might pose a substantial risk even if AIs aren’t quite capable enough to pull off the cybersecurity operation. (Though it seems not that bad of a risk.)
Yeah that’s right, I made too broad a claim and only meant to say it was an argument against their ability to pose a threat as rogue independent agents.