AI Ethics != AI Safety
[shortform—understanding why we think certain things is hard, and if writing it down helps me, maybe posting it will help someone else too]
I’ve been hearing a lot about “AI ethics” from various people and companies in the tech industry for a while now. This has always annoyed me for some ill-defined reason, and I think I’ve finally figured out why. Allow me an analogy:
AI Safety is kinda like the FBI. It needs to be big and comprehensive, with many arms, capable of addressing real threats that can do significant damage to things that matter. Many of these threats are hypothetical or abstract, and containing them requires foresight, prediction, and planning.
AI Ethics is kinda like a mall cop. It needs to be visible, help shoppers feel good, and discourage the occasional malcontent from making a fuss.
So far, so good; we need both of these, or at least both have a place.
The problem is that AI Ethics is the Hot New Thing, everyone has Jumped On The Bandwagon, and if you don’t support AI Ethics there’s a social cost to pay. Meanwhile, AI Safety is dull, boring, and “look, I don’t know why we even have those guys; it’s not like anyone is going to fly a 747 into a skyscraper any time soon”.
As a direct result of this, the tech industry appears to be allocating the FBI’s budget to the mall cop, and the mall cop’s budget to the FBI.
That’s what has been bothering me about AI Ethics.