So far, my tentative conclusion is that believing that we are probably in a simulation shouldn’t really affect our actions.
Well, you should avoid doing things that are severely offensive to Corvid-god and Cetacean-god and Neanderthal-god and Elephant-god, etc., at least to an extent comparable to how you think an AI should orient itself toward monkeys if it thinks it’s in your simulation.
I think that we should indeed consider what the corvid-god wants at the same point in the future where we’re considering building the simulations David describes in this post.
More directly: David isn’t proposing that we do particularly different things now; he’s just noting an argument that we might take actions later that affect whether unaligned AIs kill us.
That’s not when you consider it; you consider it at the first point when you could make agreements with your simulators. But some people think that you can already do this. If you think you can already do this, then you should stop being mean to corvids right now, because the Corvid-god would want to give you a substantial amount of what you like in exchange for you stopping being mean to corvids as soon as possible.
Notably, David is proposing that AIs take different actions prior to making powerful sims: not kill all the humans.
Actually the AI can use powerful sims here: if the AI holds off on killing us until it makes the powerful sims, then if the acausal trade proposed here doesn’t work out, it can just kill us then. That lets it avoid the cost of letting us have the tiny share of sunlight, though not the costs of keeping us alive during its early capabilities explosion.
Yes, but most of the expected cost is in keeping the humans alive/happy prior to being really smart.
This cost presumably goes way down if it kills everyone physically and scans their brains, but people obviously don’t want this.
I agree. But people often refer to the cost of the solar output that goes to earth, and that particular cost doesn’t get paid until late.
Yep, fair point. Those AIs will plausibly have put much more thought into this stuff than we currently have, but I agree the asymmetry is smaller than I made it sound.
I agree we should treat animals well, and the simulation argument provides a bit of extra reason to do so. I don’t think it’s a comparably strong case to the AI being kind to the humans, though: I don’t expect many humans in the Future to run simulations where crows build industrial civilization and primates get stuck at the level of baboons, and then reward the crows if they treat the baboons well. Similarly, I would be quite surprised if we were in a simulation whose point is kindness to crows. I agree it’s possible that the simulators care about animal welfare, but I would include that under general morality, and I don’t think we have a particular reason to believe that the smarter animals have more simulators supporting them.
Smarter animals (or rather, smarter animals from, say, 50 million years ago) have a higher fraction of the lightcone under the ownership of their descendants who invented friendly AGI, right? They might want to bargain with human-owned FAI universes.
Yeah, they might, but I don’t really expect them to care too much about their crow-level non-sapient relatives, just like we don’t care much more about baboons than about hippos. By contrast, I expect that our descendants will care quite a lot about 2024-humans, as some of them will in fact be 2024-humans who lived through the Singularity, remember being afraid of the AI killing their families, and wished there were commitments in place that would incentivize the AI to leave their families alive if the AI wins. I think it’s an important disanalogy that there weren’t crows 50 million years ago who thought, during the famous crow-primate war, that if they won they would really want to create simulations that incentivize the primates to treat them well in the worlds where the primates win.