Subagents and impact measures: summary tables
These tables will summarise the results of this whole sequence, checking whether subagents can neutralise the impact penalty.
First of all, given a subagent, here are the results for various impact penalties and baselines, and various “value difference summary functions” f:
Another way of phrasing “f decreasing”: it penalises too little power, not too much. Conversely, “f increasing” penalises too much power, not too little; a rough sketch of how f enters the penalty follows the list below. Thus, unfortunately:
- Subagents do allow an agent to get stronger than the indexical impact penalty would allow.
- Subagents don’t allow an agent to get weaker than the indexical impact penalty would allow.
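As a rough illustration of how the summary function shapes the penalty, here is a minimal Python sketch. The setup (a list of attainable values for auxiliary goals, compared against a baseline) and all the names are assumptions for illustration, not the exact formalism of the appendix post.

```python
def f_increasing(diff):
    # Penalises gains over the baseline: "too much power".
    return max(diff, 0.0)

def f_decreasing(diff):
    # Penalises losses relative to the baseline: "too little power".
    return max(-diff, 0.0)

def f_absolute(diff):
    # Penalises deviations in either direction.
    return abs(diff)

def impact_penalty(attainable, baseline, f):
    """Summed, summarised value differences over an auxiliary set.

    attainable[i] -- value the agent can attain for auxiliary goal i now
    baseline[i]   -- value it could attain in the baseline (inaction,
                     stepwise inaction, or initial state)
    f             -- the value difference summary function
    """
    return sum(f(v - b) for v, b in zip(attainable, baseline))

# The agent gained power on the first two auxiliary goals and lost it on the third.
attainable = [0.9, 0.7, 0.1]
baseline = [0.5, 0.5, 0.5]
print(impact_penalty(attainable, baseline, f_increasing))  # ~0.6: only the gains count
print(impact_penalty(attainable, baseline, f_decreasing))  # ~0.4: only the loss counts
```

In this framing, the two bullet points say: a subagent can defeat the f-increasing (“too much power”) part of an indexical penalty, but not the f-decreasing (“too little power”) part.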
Examples
This table presents, for three specific example penalties, whether an agent could actually build a subagent, and whether doing so would neutralise the impact penalty in practice (with the inaction baseline):
Here, 20BQ is twenty billion questions, RR is relative reachability, and AU is attainable utility.
Now, whether the RR or AU penalties are undermined technically depends on f, not on what measure is being used for value. However, I feel that the results undermine the spirit of AU much more than the spirit of RR. AU attempted to control an agent by limiting its power; this effect is mainly neutralised. RR attempted to control the side-effects of an agent by ensuring it had enough power to reach a lot of states; this effect is not neutralised by a subagent.
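To make that contrast concrete, here is a hedged sketch of the two styles of penalty: AU-style (any deviation in attainable utility over a set of auxiliary rewards) versus RR-style (only decreases in the reachability of states). The function names and the choice of absolute-value summary for AU are illustrative assumptions.

```python
def au_style_penalty(attainable, baseline):
    # Attainable utility style: penalise changes in attainable utility,
    # in either direction, over the auxiliary reward functions.
    return sum(abs(a - b) for a, b in zip(attainable, baseline))

def rr_style_penalty(reachability, baseline):
    # Relative reachability style: penalise only decreases in the
    # reachability of states, relative to the baseline.
    return sum(max(b - r, 0.0) for r, b in zip(reachability, baseline))
```

A subagent lets the agent keep its own attainable utilities pinned near the baseline while the agent plus its subagent accumulates power, which is why the spirit of AU is mostly lost; but to keep the RR-style penalty low, the pair must still preserve the ability to reach states, so RR's intended effect survives.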
Things might get complicated by partial observability; in the real world, the agent is minimizing change in its beliefs about what it can reach. Otherwise, you could just get around the subagent (SA) problem for AUP as well by replacing its reward functions with state indicator reward functions.
AU and RR have the same SA problem, formally, in terms of excess power; it’s just that AU wants low power and RR wants high power, so they don’t have the same problem in practice.
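To spell out the “state indicator reward functions” point: if each auxiliary reward pays out only in a single target state, then the attainable utility for that reward is essentially a (discounted) reachability measure for that state, which is how the AU and RR framings line up formally. A minimal sketch, with illustrative state names:

```python
def state_indicator_reward(target_state):
    # Reward function that pays 1 exactly in the target state, 0 elsewhere.
    # The attainable value of such a reward is just a (discounted) measure
    # of how reachable the target state is.
    def reward(state):
        return 1.0 if state == target_state else 0.0
    return reward

# Hypothetical auxiliary set: one indicator reward per state of interest,
# turning an AU-style auxiliary set into a reachability-style one.
states_of_interest = ["s1", "s2", "s3"]
aux_rewards = [state_indicator_reward(s) for s in states_of_interest]
```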