redesigning and going through the effort of replacing it isn’t the most valuable course of action on the margin.
Such effort would most likely be a trivial expenditure compared to the resources those actions are aimed at acquiring, and it would be less likely to entail significant opportunity costs than for humans taking those actions, since AIs could parallelize their efforts when needed.
The number of von Neumann probes one can produce should go up the more planetary material is used, so I'm not sure the adequacy of Mercury helps much. If one produces fewer probes, the expansion (while still exponential) starts out much slower, and at any given time the growth rate would be significantly lower than it otherwise would have been.
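To put a toy number on that (a minimal sketch, assuming a made-up doubling time and made-up starting fleet sizes, purely for illustration):

```python
# Toy model: self-replicating probes with a fixed doubling time.
# Starting from a smaller fleet shifts the whole curve back, so the count
# (and hence the absolute growth rate) is lower at any given time.

def probe_count(initial_probes: int, doubling_time_years: float, t_years: float) -> float:
    """Probes at time t under pure exponential replication (no losses, no limits)."""
    return initial_probes * 2 ** (t_years / doubling_time_years)

# Hypothetical fleets: using only a little planetary material vs. a lot.
for n0 in (10, 10_000):
    row = ", ".join(f"t={t}y: {probe_count(n0, 10.0, t):,.0f}" for t in (0, 50, 100))
    print(f"start={n0:>6} -> {row}")
```

Under these made-up numbers the smaller starting fleet is three orders of magnitude behind at every point in time, even though the relative growth rate is identical.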
There is a large disjunction of possible optimal behaviors, and some of them might be pursued simultaneously to reduce risk by preserving options. Most things that look like making optimal use of the resources in our solar system, without regard for human values, are going to kill all humans.
it’s not obvious to me that even Gods eat stars in less than centuries
Same response, but it'd be about what portion of the sun's output is captured, not the rate of disassembly.
I expect an A.G.I. to only have so many “workers” / actuators / “arms” at a given time, and to only be able to pay attention to so many things
If this were a significant bottleneck, building new actuators or running in parallel to avoid attentional limitations would be made a high priority. I wouldn’t expect a capable AI to be significantly limited in this way for long.
I am only, say, 98% sure an A.G.I. would still care about getting more energy at this stage, and not something else we have no name for.
An AI might not want to be highly visible to the cosmic environment, and so might avoid dimming the star noticeably; or it might stand to gain much more from acausal trade (which would still usually entail using the local resources optimally relative to those trades); or it might have access to negentropy stores far vaster than what exploiting large celestial bodies would provide (but what could cause the system to become fully neutral toward the previously accessible resources? It would be tremendously surprising for it not to use or dissipate those resources so that no competitors can arise from them). More energy would most likely mean earlier starts on any critical phases of its plan(s), better ability to conclude that plans will work, and better ability to verify that plans have worked.
the economic calculation isn’t actually trivial
True, but some parts of the situation are easier to predict than others. For example, there's a conjunction of many factors necessary to support human life (surface temperature, as influenced by the sun, by waste heat from compute, and by the atmosphere; the Earth not being disassembled for resources; the atmosphere remaining present and non-toxic; the strength of Earth's magnetic field; etc.), and conditioned on extreme-scale unaligned AI projects that would plausibly touch many of these critical factors, the probability of survival comes out quite low for most settings of how the AI could go about them.
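As a rough illustration of how the conjunction drives the number down (a toy calculation with made-up, assumed-independent per-factor probabilities, not actual estimates):

```python
# Toy conjunction: even if each critical precondition for human survival has a
# decent chance of being left intact, the joint probability of all of them
# holding at once falls off quickly (independence assumed for simplicity).

from math import prod

assumed_odds_left_intact = {
    "sunlight / surface temperature": 0.6,
    "atmosphere present and non-toxic": 0.6,
    "Earth not disassembled for resources": 0.5,
    "biosphere inputs (water, arable land)": 0.6,
}

joint = prod(assumed_odds_left_intact.values())
print(f"Joint probability all factors hold: {joint:.2f}")  # ~0.11 with these numbers
```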
if trade really has continued with the increasingly competent society of humans on Earth, we might not need the sun to survive
I think there is actually a very narrow range of possibilities where an A.G.I. kills us ONLY because it's afraid we'll build another A.G.I. We aren't nearly competent enough to get away with that in real life.
If we’re conditioning on getting an unaligned ASI, and humans are still trying to produce a friendly competitor, this seems highly likely to result in being squished. In that scenario, we’d already be conditioning on having been able to build a first AGI, so a second becomes highly probable.
The most plausible versions, to me, involve behaviors that either don't look like they're considering the presence of humans (because they don't need to) and result in everyone dying, or that optimally exploit the presence of humans via short-term persuasion and then repurpose them for acausal trade scenarios or discard them. It does seem fair to doubt we'd be given an opportunity to build a competitor, but humanity being in a condition where it is unable to build AI for reasons other than foresight seems overwhelmingly likely to entail doom.
While we could be surprised by the outcome, possibly for reasons you've mentioned, it still seems most probable that, given an unaligned capable AI, a highly competent grab for resources that kills humans would occur, and that many rationalists are mostly working from the right model there.
I would be modestly surprised, but not very surprised, if an A.G.I. could build a Dyson sphere that dims the sun by >20% in less than a couple of decades (I think a few percent isn't enough to cause crop failure), but within a century is plausible to me.
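For scale, here's a back-of-envelope check on the material side of that (assumed inputs: Mercury's approximate mass, a swarm at roughly 1 AU, and a made-up 1 kg/m² collector areal density; it says nothing about construction speed, which is the actual crux):

```python
# Back-of-envelope: what fraction of the Sun's output a thin collector swarm
# built from Mercury's mass could intercept at ~1 AU. Mass is clearly not the
# binding constraint; how quickly such a swarm could be built is.

import math

MERCURY_MASS_KG = 3.3e23
AU_M = 1.496e11
SPHERE_AREA_M2 = 4 * math.pi * AU_M**2        # ~2.8e23 m^2 of "shell" at 1 AU

ASSUMED_DENSITY_KG_PER_M2 = 1.0               # hypothetical thin-film collectors

collector_area_m2 = MERCURY_MASS_KG / ASSUMED_DENSITY_KG_PER_M2
fraction = min(1.0, collector_area_m2 / SPHERE_AREA_M2)
print(f"Fraction of solar output intercepted: {fraction:.0%}")  # ~100% under these assumptions
```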
I don’t think we would be squashed for our potential to build a competitor. I think that a competitor would no longer be a serious threat once an A.G.I. seized all available compute.
I give a little more credence to various “unknown unknowns” about the laws of physics and the priorities of superintelligences implying that an A.G.I. would no longer care to exploit the resources we need.
Overall, rationalists are right to worry about being killed by A.G.I.
I appreciate the speculation about this.
I agree with most of this.