If the AI is self-aware and self-modifying, it should realize that E means losing out on all future utility from A and self-modify its utility for E either down [...] or up [...]
There is no “future utility” to lose by choosing E: the utility of E is precisely the expected future utility of A.
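To put the definitional point in symbols (notation mine, not from the original setup): E's utility is constructed as

$$U(E) \;=\; \mathbb{E}\big[U(A)\big],$$

so triggering E forfeits nothing in expectation by the agent's own accounting. “All future utility from A” is exactly what $U(E)$ already pays out; there is no separate quantity left over to lose.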
The AI has no concept of effort, other than that derived from its utility function.
The best idea is to be explicit about the problem: write down the situations, or the algorithm, that would lead the AI to modify itself in this way, and then we can see whether it is actually a problem; a toy version of that exercise is sketched below.
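Here is a minimal sketch of that exercise, under made-up assumptions (the toy world, the payoffs, and all names are hypothetical, not anything from the original setup). It shows an expected-utility maximizer that scores candidate self-modifications with its *current* utility function, in a world where E's payoff is set equal to A's expected payoff:

```python
import random

# Toy world: the agent either continues as A (stochastic payoff) or triggers E.
def outcome(action):
    if action == "A":
        return random.choice([0.0, 4.0])  # A's future utility: 0 or 4, so E[U(A)] = 2
    return 2.0                            # E is defined to pay exactly E[U(A)]

def u_current(o):
    return o

def estimate(utility, action, n=20_000):
    """Monte Carlo estimate of the expected utility of taking `action`."""
    return sum(utility(outcome(action)) for _ in range(n)) / n

def policy(utility):
    """What the agent would do if it held `utility`."""
    return max(["A", "E"], key=lambda a: estimate(utility, a))

def score_successor(current, candidate):
    """Self-modification is itself an action, scored by the CURRENT utility:
    the agent rates a candidate utility by how well the behaviour it induces
    does under the utility the agent holds right now."""
    return estimate(current, policy(candidate))

if __name__ == "__main__":
    u_boost_E = lambda o: 10.0 if o == 2.0 else o  # rates E's outcome higher
    u_zero_E = lambda o: 0.0 if o == 2.0 else o    # rates E's outcome lower
    for name, u in [("keep", u_current), ("boost E", u_boost_E), ("zero E", u_zero_E)]:
        print(name, round(score_successor(u_current, u), 2))
    # All three print ~2.0: with U(E) = E[U(A)], shifting E's utility in
    # either direction gains the agent nothing by its current lights.
```

The crucial design choice is that `score_successor` evaluates the modified self with `current`, not with the candidate utility: an expected-utility maximizer judges self-modifications by the utility function it presently has, so with $U(E) = \mathbb{E}[U(A)]$ there is no pressure to push E's utility either down or up.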