Thinking of corrigibility, it’s not clear to me that non-obstruction is quite what I want.
Perhaps a closer version would be something like:
A non-obstructive AI on S needs to do no worse, for each P in S, than pol(P | off & humans have all the AI’s knowledge)
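To write that out a little more explicitly (loose notation, mostly my own rather than the post’s: pol(P | on) for the human policy pursuing P with the AI switched on, pol(P | off, know) for the policy pursuing P with the AI off but with the humans given all of the AI’s knowledge, and V_P for the P-value the humans end up attaining), the condition would be roughly:

$$\forall P \in S:\quad V_P\big(\mathrm{pol}(P \mid \mathrm{on})\big) \;\ge\; V_P\big(\mathrm{pol}(P \mid \mathrm{off},\ \mathrm{know})\big)$$

i.e. the usual non-obstruction inequality, except that the off-baseline is also conditioned on the humans having the AI’s knowledge.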
This feels a bit patchy, but in principle it’d fix the most common/obvious issue of the kind I’m raising: that the AI would often otherwise have an incentive to hide information from the users so as to avoid ‘obstructing’ them when they change their minds.
I think this is more in the spirit of non-obstruction, since it compares the AI’s actions to a fully informed human baseline (I’m not claiming it’s precise, but it’s in the direction that makes sense to me). Perhaps the extra information would smooth out any undesirable spikes the AI might anticipate.
I do otherwise expect such issues to be common, but perhaps they’re usually about the AI knowing more than the humans do.
I may well be wrong about any/all of this, but (unless I’m confused) it’s not a quibble about edge cases. If I’m wrong about default spikiness, though, then it does become much more of an edge-case concern.
(You’re right that my P, -P example missed your main point; I meant it only as an example, not as a response to the point you were making with it. I should have realised that would make my overall point less clear, since interpreting it as a direct response was natural. Apologies if that seemed less than constructive: not my intent.)