That seems right.
I’d primarily been thinking about simpler-minded escape/uplift/signal-to-simulators influence (via this us), rather than UDT-style influence. If we were ever uplifted or escaped, I’d expect it to be into a world-like-ours. But of course you’re correct that UDT-style influence would apply immediately.
Opportunity costs are a consideration, though there may be behaviours that increase expected value both in direct-embeddings and in worlds-like-ours. Such win-win behaviours could be taken early.
Personally, I wouldn’t expect this to impact our short/medium-term actions much (outside of AI design). The universe looks self-similar enough that any strategy requiring only local action would use only a tiny fraction of available resources.
I think the real difficulty is likely to show up only once an SI has provided a richer picture of the universe than we’re able to understand/accept, and that picture happens to suggest radically different resource allocations.
Most people are going to be fine with “I want to take the energy of one unused star and do philosophical/astronomical calculations”; fewer with “Based on {something beyond understanding}, I’m allocating 99.99% of the energy in every reachable galaxy to {seemingly senseless waste}”.
I just hope the class of actions that are vastly important, costly, and hard to show clear motivation for is small.