Well then the answer is simple: instead of setting the goal as doing what you would want at that specific point in time (which might actually work, assuming you didn't want your will to be modified to want something cheap), you set it to do what it thinks you, at the time you created it, would have wanted. The assumption is that the AI would know you'd want it to do what the future you would want, but not to perversely modify you into wanting weird things (like death, which is the cheapest thing to satisfy). Problem solved, although your AI is going to need a lot of background knowledge and intelligence to actually pull this off.
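To make the distinction concrete, here's a hypothetical Python sketch of the two ways of pointing the goal. None of the names or the toy scoring function come from anywhere real; they just illustrate "optimize whatever the creator currently wants" versus "optimize what the creator wanted at creation time."

```python
from dataclasses import dataclass
from typing import Callable

# A preference function is just a stand-in: it scores how much an outcome is wanted.
Preferences = Callable[[str], float]


@dataclass
class Creator:
    preferences: Preferences  # what the creator wants *right now* (can drift or be manipulated)


def choose_by_current_wishes(options: list[str], creator: Creator) -> str:
    # Risky version: optimizes whatever the creator happens to want at decision time,
    # so modifying the creator into wanting something cheap would count as success.
    return max(options, key=creator.preferences)


def choose_by_frozen_wishes(options: list[str], frozen: Preferences) -> str:
    # Version described above: optimize what the creator wanted at creation time,
    # which includes their wish that their future, unmanipulated wishes be respected.
    return max(options, key=frozen)


if __name__ == "__main__":
    original = lambda outcome: {"help me": 1.0, "modify me to want nothing": -10.0}.get(outcome, 0.0)
    creator = Creator(preferences=original)
    frozen_at_creation = original  # snapshot taken when the AI is built

    options = ["help me", "modify me to want nothing"]
    print(choose_by_current_wishes(options, creator))        # safe only while the creator's preferences stay intact
    print(choose_by_frozen_wishes(options, frozen_at_creation))  # keeps pointing at the original values
```

The hard part the comment gestures at is all hidden inside `frozen`: actually representing "what I would have wanted" takes a lot of background knowledge, not just a snapshot of a scoring function.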