If we’re simulating them perfectly, then in total they are using less compute than we are. And their simulation of a third civilization uses less compute still. So this is only a way to pass control to ever-smaller civilizations. That’s insufficient, which is why I was thinking it’s actually necessary to predict what they do given a less-than-perfect simulation, and that is a difficult problem.
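To make the budget point concrete, here is the nesting argument as a toy chain of inequalities ($C(\cdot)$ for total compute consumed is my notation, not anything from the thread):

$$C(\text{us}) \ge C(\text{them}) \ge C(\text{their simulation}) \ge C(\text{third civilization}) \ge \cdots$$

A perfect simulation has to reproduce every computation the simulated civilization runs, including its own nested simulations, so each level of nesting gets a strictly smaller budget.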
Actually I think I misinterpreted your comment: you are proposing deciding whether to hand over control _without_ simulating them actually running AI code. But this is still an extrapolation: before most of their civilization’s cognitive work is done by AI, their own simulations will be interleaved with their biological computation. So you can’t just simulate them up to the point where they hand off control to AI, since there will be a good deal of computation before that point too.
Basically, you are predicting what the aliens do in situation X without actually being able to simulate situation X yourself. This is an extrapolation, and an understanding of alien social dynamics would be necessary to predict this accurately.
I agree that you’d need to do some reasoning rather than simply simulating the entire world and seeing what happens. But you could afford quite a lot of simulating if you had the ability to rerun evolution, so you could, e.g., probe the entire landscape of what they would do under different conditions (including what groups would do). The hardest things to simulate are the results of the experiments they run, but again you could probe the entire range of possible outcomes. You could also probably recruit aliens to help explain what the important features of the situation are and how the key decisions are likely to be made, if you can’t form good models from the extensive simulations alone.
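As a purely illustrative sketch of what “probing the landscape” might look like in miniature: `simulate_civilization` below is a hypothetical stand-in for the (astronomically expensive) rerun-evolution simulation, and the varied conditions are invented for the example.

```python
import random
from collections import Counter

def simulate_civilization(conditions):
    """Hypothetical stand-in for the full (rerun-evolution-scale) simulation.

    Pretends the civilization's hand-off decision depends on the given
    conditions in some complicated way; a real version would be the
    expensive simulation itself.
    """
    score = conditions["coordination_level"]
    if conditions["experiment_result"] == "success":
        score += 0.3
    return "hands_off_to_ai" if score > 0.6 else "keeps_control"

def probe_decision_landscape(n_runs=1000, seed=0):
    """Sweep many plausible conditions and tally the decisions reached.

    The point of the sketch: with a huge simulation budget you don't need
    one perfect prediction of situation X; you can sample the conditions
    you can't pin down (e.g. how key experiments turn out) and check how
    robust the decision is across the whole landscape.
    """
    rng = random.Random(seed)
    outcomes = Counter()
    for _ in range(n_runs):
        conditions = {
            "experiment_result": rng.choice(["success", "failure", "ambiguous"]),
            "coordination_level": rng.uniform(0.0, 1.0),
        }
        outcomes[simulate_civilization(conditions)] += 1
    return outcomes

print(probe_decision_landscape())
```

The aggregate tally shows how sensitive the hand-off decision is to the conditions you can’t compute directly, which is the kind of robustness check this comment gestures at.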
The complexity of simulating “a human who thinks they’ve seen expensive computation X” seems much closer to the complexity of simulating a human brain than to the complexity of simulating X.