In this scenario I just don’t think “rational” and “altruistic” are compatible (not even using an approximation to an acausal decision theory and assuming that other people do so too, which is what makes altruism rational in the prisoner’s dilemma). I’d just run the fuck away.
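For what it’s worth, here’s a minimal sketch of the prisoner’s-dilemma point in that parenthetical (the payoff numbers and helper names are mine, just for illustration): if the other player’s move is causally independent of yours, defection dominates, but if they’re running roughly the same decision procedure as you, your choice effectively picks both moves and cooperation comes out ahead.

```python
# Illustrative sketch of why correlated ("twin") decisions make cooperation
# rational in a one-shot prisoner's dilemma. Payoff numbers are assumed,
# not taken from the comment above.

# Payoffs for (my_move, their_move); higher is better for me.
PAYOFF = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def causal_best_response(their_move):
    """If their move is fixed independently of mine, defecting dominates."""
    return max(("C", "D"), key=lambda mine: PAYOFF[(mine, their_move)])

def twin_best_choice():
    """If they run the same decision procedure, their move mirrors mine."""
    return max(("C", "D"), key=lambda mine: PAYOFF[(mine, mine)])

print(causal_best_response("C"))  # D: defection dominates against a fixed opponent
print(causal_best_response("D"))  # D
print(twin_best_choice())         # C: cooperation wins when choices are correlated
```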
Anyway, this isn’t analogous at all to the scenario in the study. There’s no way you’ll harm anyone (including yourself) by asking the researcher if she’s OK and (say) calling an ambulance or something if she isn’t.
I think there is a very good case that a hypothetical rational agent motivated entirely by helping others would run away. That doesn’t mean the choice isn’t altruistic, just that there is realistically nothing it can do to help here except leave, survive, and find others to help.