How safe is “safe” AI development?
There are now at least two AI development companies purporting to research safe AI, and both have existed for a couple of years now, so I think it’s worth taking another look at how safe “safe” AI development is.
One of those companies is GoodAI. I don’t know a lot about them beyond what’s on their website. They clearly seem to be aware of AI safety concerns and the need for alignment, but they are also pursuing capabilities research. OpenAI is the other and is similarly pursuing capabilities research, but, at least based on what I know about them publicly, they only go so far as to say they want safe AI, although they do employ at least one person known to be actively working on AI safety. There may be other companies in the safe AI development space, but to the best of my knowledge other efforts do not make explicit statements about safety and are focused only on capabilities (although AI for self-driving cars, for example, has certain mundane safety concerns different from those of AI safety research).
When OpenAI was started there was some discussion about whether it was a good idea. Ben Hoffman said no, Nate Soares had a positive reaction, and others had mixed responses. Some quick searching hasn’t turned up any explicit opinions on GoodAI, so I presume people feel much the same about them as they do about OpenAI. A quick rehash of some of the arguments:
No AI is safe AI until we are sure we can align it.
Capabilities research is in conflict with safety research, so no capabilities effort can be trusted to be fully committed to safety.
If you are going to research AI anyway, it’s better to try to be safe than not to try at all.
More people working on safety is always better, and the associated capabilities research these companies are doing would have happened anyway.
We’re a couple of years on now, though, and AI seems to be on a strong upward swing. Does it make sense to encourage more companies to be like OpenAI and GoodAI and target safe AI, perhaps via a self-regulatory organization, or is encouraging safety as an explicit goal of capabilities research unlikely to have much effect?
I’m inclined to suspect that some attention to safety is better than none, because it gives a wedge with which to push for more safety later, so I’m especially curious to hear arguments that it wouldn’t help.