Maybe this has been said before, but here is a simple idea:
Directly specify a utility function U which you are not sure about, but also discount the AI's own power as part of it. So the new utility function is U - power(AI), where power is a fast-growing function of some mix of the AI's source code complexity, intelligence, hardware, and electricity costs. One needs to be careful about how "self" is defined here, since a careful redefinition of it by the AI would remove the controls.
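Roughly, as a toy sketch (everything here is hypothetical: the AgentState fields, the choice of proxies, and the squared penalty are just stand-ins for whatever measure of the AI's power one actually picks):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentState:
    # Crude, hypothetical proxies for the agent's own "power"
    code_complexity: float  # e.g. compressed size of its own source
    compute: float          # e.g. FLOP/s of hardware it controls
    energy_cost: float      # e.g. electricity spend per day

def power_penalty(a: AgentState) -> float:
    """Fast-growing function of the capability proxies.
    Superlinear, so gaining power gets expensive quickly."""
    return (a.code_complexity + a.compute + a.energy_cost) ** 2

def penalized_utility(U: Callable[[object], float],
                      world, a: AgentState) -> float:
    """U'(world, agent) = U(world) - power(agent)."""
    return U(world) - power_penalty(a)
```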
One also needs to handle the creation of sub-agents carefully: in a naive implementation, a sub-agent would just optimize U with no power restriction, and the parent could route its power-seeking through it (see the sketch below).
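Continuing the sketch above, one way to close that hole is to hand the sub-agent the already-penalized objective and to count the sub-agent's resources toward the parent's own power, so delegation doesn't dodge the penalty (again, an illustrative assumption, not a worked-out scheme):

```python
def spawn_subagent(parent: AgentState, sub: AgentState,
                   U: Callable[[object], float]) -> Callable[[object], float]:
    """Return the objective the sub-agent should be given: U minus the
    penalty on the *combined* parent + sub-agent resources."""
    combined = AgentState(
        code_complexity=parent.code_complexity + sub.code_complexity,
        compute=parent.compute + sub.compute,
        energy_cost=parent.energy_cost + sub.energy_cost,
    )
    return lambda world: U(world) - power_penalty(combined)
```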
This is likely not enough, but it has the advantage that the AI does not start out with a drive to become more powerful, which is better than boxing an AI that does.