I’m not sure quite what you mean by goals here. The most plausible interpretation I can offer is that:
Goals are the drives that cause behavior. “Because of goals X and Y” is an answer to “Why did you do Z?” (and not an answer to “Why should you do Z?”).
In this case, we ought to adopt whichever set of goals (through the actions those goals cause) maximizes our expected utility. Our utility function needn’t mention goal-achievement specifically; goals are just the mechanism by which it gets implemented. Acquiring a goal uncorrelated with our utility function is bad, because value is fragile: behavior optimized for something else will almost certainly trade away things we actually care about.
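To put that slightly more formally (this is my own notation, nothing the argument depends on): if $G$ ranges over candidate goal sets and $\pi_G$ is the pattern of actions those goals would cause, the prescription is roughly

$$G^{*} = \arg\max_{G} \, \mathbb{E}\big[\, U(\text{outcome of } \pi_G) \,\big]$$

That is, goals are scored instrumentally, by the expected utility of the behavior they generate, not by any term in $U$ that rewards goal-achievement for its own sake.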
It’s not that the causes of the goal “get out of pain” are bad; it’s that the consequences might be. For a wide range of utility functions (most of which make no explicit mention of pain), a system that provided information about damage without otherwise altering the decision-making process would be more useful than one that, like pain, hijacks the decision process directly.
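If it helps, here’s a toy sketch of the distinction I have in mind. Every name and number below is invented for illustration; it isn’t meant as a model of anything in particular, just the difference between a damage signal used as information and a “get out of pain” drive that overrides the utility calculation.

```python
# Contrast: damage signal as pure *information* feeding an expected-utility
# decision, vs. a "get out of pain" goal that short-circuits that decision.
# All actions, utilities, and stakes are made-up illustrative numbers.

ACTIONS = ("keep_working", "treat_injury")

def expected_utility(action, p_injured, stakes):
    """Toy expected utility. `stakes` is the value of staying on task;
    working while injured costs 8 in expectation, treating an injury
    gains 10 if there is one and wastes 2 if there isn't."""
    if action == "treat_injury":
        return p_injured * 10.0 - (1.0 - p_injured) * 2.0
    return stakes - p_injured * 8.0

def informational_agent(damage_signal, stakes):
    """The damage signal only updates the belief that we're injured;
    the decision is still made by maximizing expected utility."""
    p_injured = 0.9 if damage_signal else 0.05
    return max(ACTIONS, key=lambda a: expected_utility(a, p_injured, stakes))

def pain_driven_agent(damage_signal, stakes):
    """A 'get out of pain' goal overrides the decision process: whenever
    the signal fires, the agent drops the task, regardless of what its
    own utility function would recommend."""
    if damage_signal:
        return "treat_injury"
    return max(ACTIONS, key=lambda a: expected_utility(a, 0.05, stakes))

if __name__ == "__main__":
    # When the stakes are high enough, the two agents come apart: the
    # informational agent keeps working through the damage signal because
    # its utility function says that's worth it; the pain-driven one can't.
    for stakes in (3.0, 20.0):
        print(stakes,
              informational_agent(True, stakes),
              pain_driven_agent(True, stakes))
```

At low stakes the two agents act identically; at high stakes only the informational agent can decide that pushing through the damage is worth it, which is the sense in which the information-only system serves a wider range of utility functions.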