I agree we should treat animals well, and the simulation argument provides a bit of extra reason to do so. I don’t think it’s as strong a case as the one for the AI being kind to humans, though: I don’t expect many humans in the Future to run simulations where crows build industrial civilization and primates get stuck at the level of baboons, then reward the crows if they treat the baboons well. Similarly, I would be quite surprised if we were in a simulation whose point is to be kind to crows. I agree it’s possible that the simulators care about animal welfare, but I would include that under general morality, and I don’t think we have a particular reason to believe that the smarter animals have more simulators supporting them.
Smarter animals (or rather, smarter animals from, say, 50 million years ago) have a higher fraction of the lightcone under the ownership of their descendants who invented friendly AGI, right? They might want to bargain with human-owned FAI universes.
Yeah, they might, but I don’t really expect them to care too much about their crow-level non-sapient relatives, just like we don’t care much more about baboons than about hippos. Meanwhile, I expect that our descendants will care quite a lot about 2024-humans, as some of them will in fact be 2024-humans who lived through the Singularity and remember being afraid of the AI killing their family, and wished there were commitments for the future that would incentivize the AI to leave their families alive if the AI wins. I think it’s an important disanalogy that there weren’t crows who thought, 50 million years ago during the famous crow-primate war, that if they win, they really want to create simulations that incentivize the primates to treat them well in the worlds where the primates win.