A fair point, though that mindset is essentially a hacker's: an automatic "how can I break or subvert this system?" reaction to everything.
But the thing is, computer security is an intensely practical field. It's very much like engineering: solutions have to be realistic and implementable, bad things happen if it fucks up, people pay a lot of money for good solutions, and those solutions are often specific to the circumstances.
AI safety research at the moment is very far from this.