I’ve had the idea that AGI may, along the lines of timeless decision theory, decide to spare weaker life forms in order to acausally increase the probability that other AGIs greater than itself spare it—but I’m not good at acausal reasoning and this sounds far too optimistic and quasi-religious for me, so I put very small (though nonzero) credence in the idea that it would be a convergent strategy.
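To make the shape of that argument concrete, here is a toy expected-utility sketch in Python. Everything in it is an invented assumption for illustration (the correlation parameter, the probabilities, the payoffs); it is not a real model of timeless/acausal decision theory, just the inequality the argument turns on.

```python
# Toy sketch of the acausal-sparing argument. All parameters are invented
# illustrative assumptions, not a serious model of TDT/FDT.

def expected_utility(spare: bool,
                     cost_of_sparing: float = 0.01,        # resources given up by sparing weaker life
                     p_stronger_agi: float = 0.1,          # chance a stronger AGI exists or appears
                     value_of_being_spared: float = 10.0,  # value to this AGI of being spared itself
                     correlation: float = 0.5) -> float:
    """Assume (crudely) that this AGI's choice is logically correlated with the
    choices of stronger AGIs facing the same decision: choosing to spare raises
    the probability that it is spared in turn by `correlation`."""
    p_spared = p_stronger_agi * (correlation if spare else 0.0)
    return p_spared * value_of_being_spared - (cost_of_sparing if spare else 0.0)

# The "convergent strategy" question is just whether this inequality holds:
#   correlation * p_stronger_agi * value_of_being_spared > cost_of_sparing
print(expected_utility(spare=True))   # 0.49 with the toy numbers above
print(expected_utility(spare=False))  # 0.0
```

Whether sparing comes out ahead depends entirely on how large that correlation term really is, which is exactly the part of acausal reasoning that is hard to pin down.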
Also, humans clearly are not following this rule, given that vegans (even more so abolitionists) are a tiny minority of the population.
I’ll say I definitely think it’s too optimistic and I don’t put too much stock in it. Still, I think it’s worth thinking about.
Yes, we are absolutely not following the rule. The reasons why I think it might change with an AGI:

(1) Currently we humans, despite what we say when we talk about aliens, still place a high prior on being alone in the universe, or, from dominant religious perspectives, on being the most intelligent beings there are. Those things combine to make us think there are no consequences to our actions against other life. An AGI, itself a proof of concept that there can be levels of general intelligence, may have more reason to be cautious.

(2) Humans are not as rational. Not that a rational human would necessarily decide to be vegan; given our priors, we have little reason to suspect that higher powers would care, especially since predation is already the norm in the animal kingdom. But humans are also pretty good at taking very dangerous risks, risks an AGI might be more cautious about.

(3) There is something to be said about degrees of following the rule. At one point humans were pretty confident about doing whatever they wanted to nature; nowadays at least some proportion of the population wants to, at the very least, not destroy it all. Partly for self-preservation reasons, but also partly for its intrinsic value (and probably 0% out of fear of god-like retribution, to be fair, haha).

I doubt acausal reasoning would make an AGI conclude it can never harm any humans, but perhaps it would land on something like “spare at least x%”.
I think the main counterargument would be the fear of us creating a new AGI, so it may come down to how much effort the AGI has to expend to monitor/prevent that from happening.
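In the toy sketch above, that counterargument just amounts to folding the monitoring cost into the cost of sparing (again, purely invented numbers):

```python
# Same toy inequality, but the cost of sparing now includes the assumed ongoing
# cost of monitoring humans so they don't build a rival AGI.
correlation, p_stronger_agi, value_of_being_spared = 0.5, 0.1, 10.0
base_sparing_cost = 0.01
monitoring_cost = 0.3  # invented number; the crux is how big this really is

worth_sparing = (correlation * p_stronger_agi * value_of_being_spared
                 > base_sparing_cost + monitoring_cost)
print(worth_sparing)  # True here (0.5 > 0.31), but it flips once monitoring gets expensive enough
```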