My concern with conflating those two definitions of alignment is largely with the degree of reliability that’s relevant.
The definition “does what the developer wanted” seems like it could cash out as something like “x% of the responses are good”. So, if 99.7% of responses are “good”, it’s “99.7% aligned”. You could even strengthen that as something like “99.7% aligned against adversarial prompting”.
On the other hand, from a safety perspective, the relevant metric is something more like “probabilistic confidence that it’s aligned against any input”. So “99.7% aligned” means something more like “99.7% chance that it will always be safe, regardless of who provides the inputs, how many inputs they provide, and how adversarial they are”.
In the former case, that sounds like a horrifyingly low number: a 0.3% per-response failure rate means you expect the first catastrophic response within a few hundred queries. What do you mean we only get to ask the AI roughly 300 things in total before everyone dies? How is that possibly a good situation to be in? But in the latter case, I would roll those dice in a heartbeat if I could be convinced the odds were justified.
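To make the gap between the two readings concrete, here's a back-of-the-envelope sketch (illustrative arithmetic only, not numbers about any real system):

```python
# Reading 1: each response is independently "good" with probability 0.997.
# The chance of survival decays with every query you make.
p_good = 0.997
n = 300
p_at_least_one_bad = 1 - p_good ** n          # ~0.59 after 300 queries
expected_queries_to_first_bad = 1 / (1 - p_good)  # ~333 queries on average

# Reading 2: a one-time 99.7% chance the system is safe for *all* inputs.
# The survival probability is flat no matter how many queries you make.
p_safe_forever = 0.997
```

Under the first reading you've more likely than not hit a catastrophic response by query 300; under the second, unlimited adversarial querying never moves the 99.7% figure.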
So anyway, I still object to using the “alignment” term to cover both situations.