Suppose you made your dataset larger and larger. Once it got "really large," let's say, would you feel confident that your AI model had learned enough to remain safe, even in the hands of a bad actor, even if its dataset contained nuke-building instructions?