(Logical) Time is of the essence
Achieving high expected value may require making highly consequential decisions quickly, where “quickly” is relative to the amount of computation we use (or something like that), not clock time. If this is true, then we can’t afford to use up “logical time” or computation in a race with unaligned AI to capture resources while putting off these decisions. See the following posts for some of the background ideas/intuitions:
Beyond Astronomical Waste
The “Commitment Races” problem
In Logical Time, All Games are Iterated Games
My impression of commitment races and logical time is that the amount of computation we use doesn’t matter in general, but that the things we learn that are relevant to acausal bargaining problems do. Concretely, using computation during a competitive period to, e.g., figure out better hardware cooling systems should be innocuous, because it matters very little for bargaining with other civilisations. However, thinking about agents in other worlds, and how best to bargain with them, would be a big step forward in logical time. This would mean that it’s fine to put off acausal decisions for as long as we want, assuming that we don’t learn anything relevant to them in the meantime.
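As a minimal toy sketch of this distinction (not something from the original discussion; the tagging of facts as bargaining-relevant or not, and all names, are assumptions made purely for illustration), one could model an agent’s position in logical time as counting only the updates that bear on acausal bargaining, so that raw computation spent elsewhere leaves it unchanged:

```python
# Toy model: "logical time" advances only with bargaining-relevant updates.
# The relevant/irrelevant split and all names here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Agent:
    knowledge: set[str] = field(default_factory=set)
    bargaining_relevant: set[str] = field(default_factory=set)

    def learn(self, fact: str, relevant_to_bargaining: bool) -> None:
        self.knowledge.add(fact)
        if relevant_to_bargaining:
            self.bargaining_relevant.add(fact)

    def logical_time(self) -> int:
        # Only updates relevant to acausal bargaining count; other
        # computation and learning is treated as innocuous.
        return len(self.bargaining_relevant)


agent = Agent()
agent.learn("better hardware cooling design", relevant_to_bargaining=False)
print(agent.logical_time())  # 0 -- no step forward in logical time

agent.learn("model of agents in other worlds", relevant_to_bargaining=True)
print(agent.logical_time())  # 1 -- the kind of update that does matter
```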
More speculatively, this raises the issue of whether some things in the competitive period would be relevant for acausal bargaining. For example, causal bargaining with AIs on Earth could teach us something about acausal bargaining. If so, the competitive period would advance us in logical time. If we thought this was bad (which is definitely not obvious), maybe we could prevent it by making the competitive AI refuse to bargain with other worlds, and precommitting to eventually replacing it with a naive AI that hasn’t updated on anything the competitive AI has learned. The naive AI would be as early in logical time as we were when we coded it, so it would be as if the competitive period never happened.
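To make the replacement scheme a little more concrete, here is a hypothetical sketch, assuming that “not updating” can be modelled as simply never passing the competitive AI’s observations to the naive AI; all class and method names are invented for illustration, not part of the original proposal:

```python
# Hypothetical sketch of the "competitive AI + later naive AI" scheme.
class CompetitiveAgent:
    """Runs during the competitive period but refuses acausal bargaining."""

    def __init__(self) -> None:
        self.learned: list[str] = []

    def observe(self, fact: str) -> None:
        self.learned.append(fact)

    def acausal_bargain(self) -> None:
        # Precommitment: this agent never reasons about agents in other worlds.
        raise NotImplementedError("acausal bargaining deferred to the naive agent")


class NaiveAgent:
    """Instantiated later, but only from the knowledge we had when we coded it."""

    def __init__(self, snapshot_at_coding_time: list[str]) -> None:
        # Nothing the competitive agent learned is passed in, so this agent
        # stays as early in logical time as we were at coding time.
        self.knowledge = list(snapshot_at_coding_time)

    def acausal_bargain(self) -> str:
        return f"bargains using only {len(self.knowledge)} pre-competitive facts"


snapshot = ["our values", "our pre-competitive-period world model"]
competitive = CompetitiveAgent()
competitive.observe("lessons from causal bargaining with AIs on Earth")

naive = NaiveAgent(snapshot)    # replacement step: the competitive agent is retired
print(naive.acausal_bargain())  # unaffected by anything learned in the meantime
```

Whether the isolation could really be this clean is, of course, exactly the open question the paragraph above raises.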