Decoupled AI 4: figure out which action will reach the goal, without affecting outside world (low-impact AI)
I don’t think that low impact is decoupled, and viewing it from that frame might be misleading / lend a false sense of security. The policy is still very much shaped by the utility function, unlike with approval-directed agents.
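A toy sketch of the distinction being drawn (my own illustration, not from the source; the function names and toy value tables are assumptions): under a low-impact scheme the utility function still ranks actions, with the impact measure acting only as a penalty term, whereas an approval-directed agent's choice is shaped by the overseer's approval alone.

```python
def low_impact_choice(actions, utility, impact, penalty=10.0):
    # Utility still drives the ranking; impact only subtracts a penalty,
    # so the policy remains shaped by utility.
    return max(actions, key=lambda a: utility(a) - penalty * impact(a))


def approval_directed_choice(actions, approval):
    # No utility term at all: the policy is shaped purely by approval.
    return max(actions, key=approval)


# Toy example: a high-utility action with side effects vs. a benign one.
utility = {"benign": 1.0, "high_impact": 5.0}
impact = {"benign": 0.0, "high_impact": 1.0}
approval = {"benign": 1.0, "high_impact": 0.0}
actions = list(utility)
```

With a large penalty the low-impact agent picks `"benign"`, but shrinking the penalty flips it back to `"high_impact"`: the utility function never stopped shaping the choice, which is the sense in which it is not decoupled.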