Thanks. I agree with your overall conclusions.
On the specifics, Bostrom’s simulation argument is more than just a parallel here: it bears on how rich we might expect our direct parent simulator to be.
The simulation argument applies equally whether there is one base world like ours or an uncountable number of parallel worlds embedded in Tegmark IV structures. Either way, if you buy case 3 (that almost all observers with experiences like ours are simulated), the proportion of worlds simulated by a world like ours rises close to 1 (I’m counting worlds “depth-first”, since that seems most intuitive, and infinite simulation depth from worlds like ours seems impossible).
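To make the “close to 1” claim concrete, here’s a minimal toy sketch of the depth-first count; the parameters (sims_per_world, max_depth) are purely illustrative assumptions of mine, not anything from Bostrom’s paper:

```python
# Toy depth-first count of worlds-like-ours, assuming (hypothetically) that each
# such world runs `sims_per_world` ancestor simulations of worlds like itself,
# and that nesting bottoms out at `max_depth` (infinite depth seems impossible).

def count_worlds(sims_per_world: int, max_depth: int) -> tuple[int, int]:
    """Return (total worlds like ours, worlds simulated by a world like ours)."""
    total, simulated = 0, 0

    def visit(depth: int) -> None:
        nonlocal total, simulated
        total += 1
        if depth > 0:          # depth 0 is the base-level (directly embedded) world
            simulated += 1
        if depth < max_depth:  # depth-first recursion into this world's own sims
            for _ in range(sims_per_world):
                visit(depth + 1)

    visit(0)
    return total, simulated

total, simulated = count_worlds(sims_per_world=3, max_depth=4)
print(simulated / total)  # ~0.99; approaches 1 as either parameter grows
```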
If Tegmark’s picture is accurate, we’d expect to be embedded in some hugely richer base structure—but in Bostrom’s case 3 we’d likely have to get through N levels of worlds-like-ours first. While that wouldn’t significantly change the amount of value on the table, it might make it a lot harder for us to exert influence on the most valuable structures.
This probably argues for your overall point: we’re not the best minds to be making such calculations (either on the answers, or on the expected utility of finding good answers).
I’m not sure it makes sense to talk about “expect” here. (I’m confused about anthropics, and especially about first-person subjective expectations.) But if you take the third-person UDT-like perspective, we’re directly embedded in some hugely richer base structures, and also indirectly embedded via N levels of worlds-like-ours, and having more of the latter doesn’t reduce how much value (in the UDT-utility sense) we can gain by influencing the former; it just gives us more options that we can choose to take or not. In other words, we always have the option of pretending the latter don’t exist and just optimizing for exerting influence via the direct embeddings.
On second thought, it does increase the opportunity cost of exerting such influence, because we’d be spending resources in both the directly embedded worlds and the indirectly embedded worlds to do so. To get around this, the eventual superintelligence could wait until a time in our universe when Bostrom’s proposition 3 is no longer true (or is true to a lesser extent) before trying to influence richer universes, since presumably only the historically interesting periods of our universe are heavily simulated by worlds-like-ours.
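As a rough illustration of that opportunity cost (a made-up toy model with purely illustrative numbers): if every copy of our world that commits to the influence strategy pays its cost, but only the directly embedded copy buys influence on the richer structure, then the cost per unit of influence scales with how heavily the current era is simulated:

```python
# Toy model of the opportunity cost, assuming (hypothetically) that committing to an
# influence strategy costs `resources_per_copy` in every copy of our world that runs
# it -- one direct embedding plus `n_simulated_copies` worlds-like-ours simulations --
# while only the direct embedding yields influence on the richer base structure.

def cost_per_unit_influence(resources_per_copy: float, n_simulated_copies: int) -> float:
    copies_paying = 1 + n_simulated_copies  # every copy spends the resources
    copies_buying_influence = 1             # only the direct embedding counts here
    return resources_per_copy * copies_paying / copies_buying_influence

print(cost_per_unit_influence(1.0, n_simulated_copies=10_000))  # heavily simulated era: 10001.0
print(cost_per_unit_influence(1.0, n_simulated_copies=10))      # quieter era: 11.0
```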
That seems right.
I’d been primarily thinking about more simple-minded escape/uplift/signal-to-simulators influence (via this us), rather than UDT-style influence. If we were ever uplifted or escaped, I’d expect it’d be into a world-like-ours. But of course you’re correct that UDT-style influence would apply immediately.
Opportunity costs are a consideration, though there may be behaviours that’d increase expected value in both the direct embeddings and the worlds-like-ours. Such win-win behaviours could be taken early.
Personally, I’d expect this not to impact our short/medium-term actions much (outside of AI design). The universe looks to be self-similar enough that any strategy requiring only local action would use a tiny fraction of available resources.
I think the real difficulty is only likely to show up once an SI has provided a richer picture of the universe than we’re able to understand/accept, and that picture happens to suggest radically different resource allocations.
Most people are going to be fine with “I want to take the energy of one unused star and do philosophical/astronomical calculations”; fewer with “Based on {something beyond understanding}, I’m allocating 99.99% of the energy in every reachable galaxy to {seemingly senseless waste}”.
I just hope the class of actions that are vastly important, costly, and hard to show clear motivation for turns out to be small.