It definitely does not imply that in the general case. There are plenty of counterexamples where agents self-terminate or are indifferent to continuing to exist, for a variety of reasons. This happens in humans and in various animals; I see no reason it would be excluded in AI.
To be specific, a human willing to sacrifice their life for some goal (a volunteer soldier dying for their country, a mother saving her children from danger) obviously has agency.
On the other hand, these are not cases of indifference to their continued existence, but rather willingness to trade it for some goal.
A better example of “agenty, but not caring about self-preservation” humans might be a drug addict trying to obtain their drug while not caring about death from overdose, or a conscientious depressed person who is seriously contemplating suicide while still performing their duties until the very last moment.