Alternatively, for any ε you set, it will be profitable for the AI to create a new version of itself with the same utility function but ε′ = ε/2, then give the new AI all of its resources and commit suicide.
This doesn’t seem true. An ε/2 AI will take risks, looking for higher utility, that the ε AI wouldn’t.
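One way to see this, under the (assumed) reading that an ε-satisficer accepts any action whose expected utility is within ε of the best available action: halving ε can force the agent into a gamble the laxer agent was free to avoid. A minimal Python sketch, with made-up numbers for illustration:

```python
# Assumed formalization (not spelled out in the thread): an epsilon-satisficer
# accepts any action whose expected utility is within epsilon of the maximum.

def acceptable_actions(actions, eps):
    """Return the actions whose expected utility is within eps of the best."""
    best = max(eu for _, eu in actions)
    return [name for name, eu in actions if eu >= best - eps]

# Risky action: 50% chance of utility 10, else 0 -> expected utility 5 (the max).
# Safe action: certain utility 4.2.
actions = [("risky", 0.5 * 10 + 0.5 * 0), ("safe", 4.2)]

print(acceptable_actions(actions, eps=1.0))  # ['risky', 'safe']: the eps AI may play safe
print(acceptable_actions(actions, eps=0.5))  # ['risky']: the eps/2 AI must take the gamble
```

With ε = 1 the safe action is acceptable, but with ε′ = ε/2 only the risky one is, so the successor's behavior differs from the original's and the handoff isn't utility-neutral.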