I suspect that constraining a superintelligence from creating subagents will be much harder than designing AI control methods that leave no incentive to subvert them through creation of subagents.
I suspect so too. Still, it's worth a bit of thinking about.