The opportunity cost of sparing Earth is far larger than the cost of sparing a random planet halfway across the universe. The AI starts on Earth. If it can’t disassemble Earth for spaceship mass, it has to send a small probe from Earth to Mars and disassemble Mars instead, which introduces a fair bit of delay. Not touching Earth is a big restriction in the first few years and first few doublings. Once the AI reaches a few other solar systems, not touching Earth becomes a much less important restriction.
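A toy back-of-envelope makes the asymmetry concrete. All the numbers here (doubling time, bootstrap delay, galaxy mass) are my illustrative assumptions, not figures from the argument above:

```python
# Toy model: why a startup delay costs a constant *fraction* of everything
# the AI will ever control, while sparing one distant planet costs only
# that planet's mass. All constants below are assumptions for illustration.

DOUBLING_TIME_YEARS = 1.0   # assumed doubling time of AI-controlled resources
DELAY_YEARS = 2.0           # assumed delay from bootstrapping via Mars

# A delay d at time zero shifts the whole exponential curve, so at any later
# time the AI holds only 2**(-d/T) of what it would have held otherwise.
kept = 2 ** (-DELAY_YEARS / DOUBLING_TIME_YEARS)
print(f"sparing Earth: keep only {kept:.0%} of the no-restriction total")  # 25%

# Sparing one random planet later just subtracts that planet's mass from a
# galaxy-scale total: a rounding error by comparison.
EARTH_MASS_KG = 5.97e24
GALAXY_BARYONIC_MASS_KG = 1e41  # rough order of magnitude
print(f"sparing a distant planet: lose ~{EARTH_MASS_KG / GALAXY_BARYONIC_MASS_KG:.0e} of the total")  # ~6e-17
```

With these (made-up) numbers, sparing Earth forfeits most of the lightcone, while sparing a distant planet forfeits a ~10^-17 sliver of it.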
Of course, you can’t make a TDT-style trade with the AI, because you have no acausal correlation with it; we can’t predict the AI’s actions well enough.
Intuition pump (generalising from fictional evidence): in games like Pandemic / Plague Inc., where the player “controls” a pathogen and attempts to infect the whole human population on Earth, a lucky early cross-border infection can help you win the game faster, by more than the difference between a starting infected population of 1 vs 100,000.
This informs my intuition about when the bonus of earlier spaceflight (through human help) could outweigh the penalty of not dismantling Earth.
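Here is the toy arithmetic behind that intuition; the region size and the crossing advantage are made-up numbers for illustration:

```python
import math

# Toy model of the Pandemic / Plague Inc. intuition: infections double each
# "tick" inside a region, but reaching a new region needs a rare
# border-crossing event. Constants are assumptions, not game data.

REGION_POP = 1e9  # assumed population of one region

def ticks_to_saturate(seed):
    """Doubling ticks for an infection to grow from `seed` to the full region."""
    return math.log2(REGION_POP / seed)

# Starting with 100,000 infected instead of 1 saves only ~17 ticks of growth:
head_start = ticks_to_saturate(1) - ticks_to_saturate(1e5)
print(f"bigger starting population saves ~{head_start:.1f} ticks")  # ~16.6

# But a lucky early crossing shifts region B's entire growth curve earlier.
# If it lands 30 ticks sooner (assumed), that dominates the head start:
EARLY_CROSSING_TICKS_SAVED = 30
print(EARLY_CROSSING_TICKS_SAVED > head_start)  # True
```

The analogy: human help that gets the AGI off Earth even days earlier acts like the early border crossing, while raw starting resources act like the larger seed population.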
When might human help outweigh the penalty of not dismantling Earth? It requires these conditions:
1. The AGI can very quickly reach an alternative source of materials: AGI spaceflight is superhuman.
    - AGI spacecraft, once in space, can reach e.g. the Moon within hours and the Sun within a day (a rough speed check appears after this list).
    - The AGI is willing to wait for additional computational power (it can wait until it has reached the Sun), but it really wants to leave Earth quickly.
2. The AGI’s best alternative to a negotiated agreement is to lie in wait initially: AGI ground operations are initially weaker than human.
    - In the initial days, humans could reliably prevent the AGI from building or launching spacecraft.
    - In the initial days, the AGI is vulnerable to human action, so it would have chosen to lie low and wouldn’t effectively begin dismantling Earth.
3. If there is a negotiated agreement, then human help (or nonresistance) can allow the AGI to launch its first spacecraft days earlier.
    - Relevant human decision makers recognize that the AGI will eventually win any conflict, and decide to start negotiating immediately instead.
    - Relevant human decision makers can effectively coordinate multiple parts of the economy to help the AGI, or (in the nonresistance case) can effectively prevent others from interfering with the initially weak AGI.
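As a sanity check on condition 1, here is the rough arithmetic implied by “the Moon within hours, the Sun within a day”; the specific travel times are my assumptions for illustration, using straight-line distances:

```python
# Rough check on what condition 1's travel times imply, using straight-line
# distances (a simplification) and average speeds. "Within hours" is read
# here as 3 hours, which is an assumption.

MOON_DISTANCE_KM = 3.84e5
SUN_DISTANCE_KM = 1.496e8  # 1 AU

moon_speed = MOON_DISTANCE_KM / (3 * 3600)   # Moon in 3 hours
sun_speed = SUN_DISTANCE_KM / (24 * 3600)    # Sun in one day

print(f"Moon in 3 h -> avg {moon_speed:.0f} km/s")   # ~36 km/s
print(f"Sun in 24 h -> avg {sun_speed:.0f} km/s")    # ~1730 km/s

# For scale: chemical rockets manage roughly 11-16 km/s, and the Parker
# Solar Probe peaks near 190 km/s. So condition 1 really does require
# propulsion far beyond current human technology.
```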
I now think that the conjunction of all these conditions is unlikely, so I agree that this negotiation is unlikely to work.