Un-optimised vs anti-optimised
A putative new idea for AI control; index here.
This post contains no new insights; it just puts together some old insights in a format I hope is clearer.
Most satisficers are un-optimised (above the satisficing level): they have a limited drive to optimise and transform the universe. They may still end up optimising the universe anyway: there is no penalty for doing so (and sometimes it’s a good idea for them). But if they can lazily achieve their goal, then they’re ok with that too. So they simply have low optimisation pressure.
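To make that indifference concrete, here is a minimal toy sketch (the action names and utility numbers are invented for illustration, not taken from any real design): everything above the satisficing level is equally acceptable to the agent, the universe-optimising routes included.

```python
# A toy sketch, not from the original argument: the action names and the
# utility numbers below are invented purely to illustrate the point.

ACTIONS = {
    # action: (utility achieved, rough impact on the world)
    "lazily achieve the goal":            (1.0, "low"),
    "optimise the universe for the goal": (1.5, "extreme"),
    "build a maximising subagent":        (1.4, "extreme"),
    "do nothing":                         (0.0, "none"),
}

SATISFICING_LEVEL = 1.0

def acceptable_to_satisficer(actions, threshold):
    """Return every action whose utility reaches the threshold.

    A plain satisficer is indifferent among these: nothing in its
    criterion steers it away from the high-impact options.
    """
    return [name for name, (utility, _impact) in actions.items()
            if utility >= threshold]

print(acceptable_to_satisficer(ACTIONS, SATISFICING_LEVEL))
# ['lazily achieve the goal', 'optimise the universe for the goal',
#  'build a maximising subagent']
```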
A safe “satisficer” design (or a reduced impact AI design) needs to be not only un-optimised, but specifically anti-optimised. It has to be set up so that “go out and optimise the universe” scores worse than “be lazy and achieve your goal”. The problem is that these terms are undefined (as usual), that many seemingly minor actions can optimise the universe (such as creating a subagent), and that the approach has to be safe against all possible ways of optimising the universe, not just the “maximise u” for a specific and known u.
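For contrast, here is a sketch of what the scoring has to look like (same invented action set as above; the hand-written impact penalties stand in for exactly the term we do not know how to define): every high-impact route to the goal, subagent creation included, must actively lose to the lazy one.

```python
# A minimal sketch of the anti-optimised requirement, again with invented
# numbers. The hand-written impact penalties below stand in for the term
# that we do not know how to define precisely and generally.

ACTIONS = {
    # action: (utility achieved, impact penalty -- both made up)
    "lazily achieve the goal":            (1.0, 0.1),
    "optimise the universe for the goal": (1.5, 10.0),
    "build a maximising subagent":        (1.4, 10.0),
}

SATISFICING_LEVEL = 1.0

def anti_optimised_choice(actions, threshold):
    """Among actions that reach the threshold, pick the one with the
    highest utility-minus-impact score, so the high-impact routes to
    the goal score strictly worse than the lazy one."""
    scores = {name: utility - penalty
              for name, (utility, penalty) in actions.items()
              if utility >= threshold}
    return max(scores, key=scores.get)

print(anti_optimised_choice(ACTIONS, SATISFICING_LEVEL))
# 'lazily achieve the goal'
```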
That’s why the reduced impact/safe satisficer/anti-optimised designs are so hard: you have to add a very precise yet general (anti-)optimising pressure, rather than simply removing the current optimising pressure.