A Go AI that learns to play Go via reinforcement learning might not “have a utility function that only cares about winning Go”. Using standard utility theory, you could observe its actions and try to rationalise them as if they were maximising some utility function, and the utility function you come up with probably wouldn’t be “win every game of Go you start playing” (what you actually come up with will depend, presumably, on algorithmic and training-regime details). The reason the utility function is slippery is that the AI is fundamentally an adaptation executor, not a utility maximiser.
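To make the “slipperiness” concrete, here is a minimal, entirely hypothetical sketch of that rationalisation step. Assume we model the AI as Boltzmann-rational (it picks moves with probability proportional to the exponential of their utility); then the maximum-likelihood utility we recover for each move is just the log of its observed frequency, up to a constant. The two made-up policies below are equally good at winning, yet the utilities we read off them differ, because they encode incidental preferences among winning moves rather than “winning” itself. The policies, move names, and the Boltzmann assumption are all illustrative, not taken from any actual Go system.

```python
import numpy as np

# Hypothetical position with three legal moves: A and B both win, C loses.
# Two imagined policies that both "care about winning" (they almost never
# play C) but split probability differently among the winning moves.
moves = ["A", "B", "C"]
policy_1 = np.array([0.70, 0.29, 0.01])  # happens to favour move A
policy_2 = np.array([0.40, 0.59, 0.01])  # happens to favour move B

def fit_boltzmann_utility(action_probs):
    """Max-likelihood utilities assuming pi(a) ∝ exp(u(a)) at temperature 1,
    i.e. u(a) = log pi(a) + const; normalised so the top move has utility 0."""
    u = np.log(action_probs)
    return u - u.max()

for name, pi in [("policy_1", policy_1), ("policy_2", policy_2)]:
    u = fit_boltzmann_utility(pi)
    print(name, dict(zip(moves, np.round(u, 2))))

# Both policies win from this position essentially always, yet the
# rationalised utilities disagree: they reflect training artefacts
# (which winning move the network drifted towards), not "win the game".
```

The point of the toy example is only that the utility function you infer by watching behaviour is underdetermined and contaminated by the learned policy’s quirks, which is what you’d expect from something executing adaptations rather than explicitly maximising a goal.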