A ‘reason for action’ is the standard term in Anglophone philosophy for a source of normativity of any kind. A desire, for example, is the source of normativity in a hypothetical imperative. Some have proposed that categorical imperatives exist and provide reasons for action apart from desires. Others have proposed that divine commands exist and are sources of normativity apart from desires. Still others have proposed that certain objects or states of affairs can ground normativity intrinsically, i.e. that they have intrinsic value apart from being valued by any agent.
Okay, but all of those (to the extent that they’re coherent) are observations about human axiology. Beware of committing the mind projection fallacy with respect to compellingness: you find those to be plausible sources of normativity because your brain is that of “a particular species of primate on planet Earth”. If your AI were looking for “reasons for action” that would compel all agents, it would find nothing, and if it were looking for all of the “reasons for action” that would compel each possible agent, it would spend an infinite amount of time enumerating stupid, pointless motivations. It would eventually notice categorical imperatives, fairness, compassion, etc., but it would also notice drives based on the phase of the moon, based on the extrapolated desires of submarines (according to any number of possible submarine-volition-extrapolating dynamics), based on looking at how people would want to be treated and reversing that, based on the number of living cats in the world modulo 241, based on modeling people as potted plants and considering the direction their leaves are waving...
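To make the enumeration point concrete, here is a minimal toy sketch in Python. It is only an illustration under a deliberately crude assumption: that a “reason for action” can be modeled as nothing more than a scoring function over world-states. Every name in it (WorldState, the example drives) is hypothetical, not anything drawn from the philosophical literature.

```python
# Toy sketch: enumerating candidate "reasons for action" over all possible agents.
# Any computable function from world-states to scores defines a motivation that
# some possible agent could have; almost none of them resemble human axiology.

from dataclasses import dataclass
from itertools import count


@dataclass
class WorldState:
    living_cats: int
    moon_phase: float        # 0.0 to 1.0
    suffering_relieved: int  # crude stand-in for compassion-relevant facts


# A few of the infinitely many candidate drives:
def compassion_drive(w: WorldState) -> float:
    return float(w.suffering_relieved)


def cats_mod_241_drive(w: WorldState) -> float:
    return float(w.living_cats % 241)


def moon_phase_drive(w: WorldState) -> float:
    return w.moon_phase


def arbitrary_drive(n: int):
    # The general recipe: every integer n yields yet another pointless drive,
    # e.g. "maximize the number of living cats modulo n".
    return lambda w: float(w.living_cats % n)


def enumerate_drives():
    # The enumeration never terminates, and nothing in it marks the
    # human-recognizable entries as more "compelling" than the rest.
    yield compassion_drive
    yield cats_mod_241_drive
    yield moon_phase_drive
    for n in count(2):
        yield arbitrary_drive(n)


if __name__ == "__main__":
    w = WorldState(living_cats=600_000_000, moon_phase=0.5, suffering_relieved=7)
    for i, drive in zip(range(6), enumerate_drives()):
        print(i, drive(w))
```

The only point of the sketch is that the generator never halts and nothing inside it privileges the compassion-like entries over the cat-counting ones; any such privileging has to come from the agent doing the looking.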