the three most convincing arguments i know for OP’s thesis are:
atoms on earth are “close by” and thus much more valuable to a fast-running ASI than atoms elsewhere.
(somewhat contrary to the previous argument), an ASI will be interested in quickly reaching the edge of the Hubble volume, as that’s slipping behind the cosmic horizon, so it will starlift the sun for its initial energy budget.
Robin Hanson’s “grabby aliens” argument: witnessing a super-young universe (as we do) is strong evidence against it remaining compatible with biological life for long.
that said, i’m also very interested in the counterarguments (so thanks for linking to Paul’s comments!), especially if they’d suggest actions we could take in preparation.
I think point 2 is plausible but doesn’t strongly support the idea that it would eliminate the biosphere; if it cared even a little, it could be fairly cheap to take some actions to preserve at least a version of it (including humans), even while starlifting the sun.
Point 1 is the argument I most see as supporting the thesis that misaligned AI would eliminate humanity and the biosphere. And even then, I’m not sure how robust it is (it seems premised partly on carrying our evolved intuitions about discount rates over to the AI system’s perspective).
I’ve thought a bit about actions to reduce the probability that AI takeover involves violent conflict.
I don’t think there are any amazing-looking options. If governments were generally more competent, that would help.
Having some sort of apparatus for negotiating with rogue AIs could also help, but I expect this is politically infeasible and not that leveraged to advocate for on the margin.
In preparation for what?
AI takeover.
Wait, how does the grabby aliens argument support this? I understand that it points to “the universe will be carved up between expansive spacefaring civilizations” (without reference to whether those are biological or not), and also to “the universe will cease to be a place where new biological civilizations can emerge” (without reference to what will happen to existing civilizations). But am I missing an inferential step?
i might be confused about this, but “witnessing a super-early universe” seems to support “a typical universe moment is not generating observer moments for your reference class”. but, yeah, anthropics is very confusing, so i’m not confident in this.
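a toy version of that update, assuming civilizations arrive uniformly over a habitable window of length T (my simplification here, not Hanson’s full hard-steps model), with illustrative numbers:

$$P(\text{we find ourselves at } t \approx 14\ \mathrm{Gyr} \mid \text{window } T) \approx \frac{\Delta t}{T}$$

$$\frac{P(t \approx 14\ \mathrm{Gyr} \mid T = 20\ \mathrm{Gyr})}{P(t \approx 14\ \mathrm{Gyr} \mid T = 20{,}000\ \mathrm{Gyr})} = \frac{20{,}000}{20} = 1000$$

so earliness is a large likelihood ratio in favor of “the window for new civilizations closes soon”. weighting arrivals by a hard-steps power law ($\propto t^{n-1}$) makes long windows even more surprising, so if anything this understates it.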
OK hmm I think I understand what you mean.
I would have thought about it like this:
“our reference class” includes roughly the observations we make before observing that we’re very early in the universe
This includes stuff like being a pre-singularity civilization
The anthropics here suggests there won’t be lots of civilizations arising later, falling into our reference class, and then finding that they’re much later in the universe’s history (sketched below)
It doesn’t speak to the existence or otherwise of future human-observer moments in a post-singularity civilization
… but as you say, anthropics is confusing, so I might be getting this wrong.
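To put the last two bullets in symbols (a minimal sketch; E stands for “we find ourselves as pre-singularity observers roughly 14 Gyr in”, and H_1, H_2 are hypotheses about the future):

$$\frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)}$$

The likelihood terms depend only on how each hypothesis distributes reference-class observer moments over cosmic time, so two hypotheses that agree about those but disagree about post-singularity observer moments get the same likelihood and no relative update; the evidence only penalizes hypotheses on which many reference-class observers arise much later.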
By my models of anthropics, I think this goes through.