The problem here is that the set of all possible commands for which I can’t (by that definition) be maximally rewarded is so vast that the statement “if someone maximally rewards/punishes you, their orders are your purpose of life” becomes meaningless.
Not true, as the reward could include all of the unwanted consequences of following the command being divinely reverted a fraction of a second later.
That wouldn’t help. The utility would then be computed from (getting two golden bricks) and (murdering my child for a fraction of a second), which still yields lower utility than not following the command, as the comparison sketched below makes explicit.
The set of possible commands for which I can’t be maximally rewarded still remains too vast for the statement to be meaningful.
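A minimal sketch of that comparison, assuming (purely for illustration) a utility function $U$ over outcomes that is roughly additive across the two components:

$$U(\text{two golden bricks}) + U(\text{my child murdered, even for a fraction of a second}) \;<\; U(\text{refusing the command})$$

The second term on the left is so strongly negative for me that no ordinary reward in the first term, and no subsequent reversion of consequences, lifts the left side above the right.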
This sounds absurd to me. Unless, of course, you’re taking the “two golden bricks” literally, in which case I invite you to substitute it with “saving 1 billion other lives” and see whether your position still stands.