I think that we should indeed consider what the corvid-god wants at the same point in the future when we're considering building the simulations David describes in this post.
More directly: David isn't proposing we should do particularly different things now; he's just noting an argument that we might take actions later that affect whether unaligned AIs kill us.
That's not when you consider it; you consider it at the first point when you could make agreements with your simulators. But some people think you can already do this, and if you think you can already do this, then you should stop being mean to corvids right now, because the corvid-god would want to give you a substantial amount of what you like in exchange for you stopping ASAP.
Notably, David is proposing that AIs take a different action prior to making powerful sims: not killing all the humans.
Actually, the AI can use powerful sims here: if it holds off on killing us until it makes the powerful sims, it can just kill us then if the acausal trade proposed here doesn't work out. That lets it avoid the cost of letting us have the tiny share of sunlight, though not the costs of keeping us alive during its early capabilities explosion.
Yes, but most of the expected cost is in keeping the humans alive/happy before the AI is really smart.
This cost presumably goes way down if it kills everyone physically and scans their brains, but people obviously don’t want this.
I agree. But people often refer to the cost of the solar output that goes to Earth, and that particular cost doesn't get paid until late.
Yep, fair point. Those AIs will plausibly have put much more thought into this stuff than we currently have, but I agree the asymmetry is smaller than I made it sound.