The Sun is the most nutritious thing that’s reasonably close. It’s only 8 light-minutes away, yet contains the vast majority of mass within 4 light-years of the Earth. The next-nearest star, Proxima Centauri, is about 4.25 light-years away.
By “nutritious”, I mean it has a lot of what’s needed for making computers: mass-energy. In “Ultimate physical limits to computation”, Seth Lloyd imagines an “ultimate laptop”: the theoretically best computer with 1 kilogram of mass contained in 1 liter of volume. He notes a limit on calculations per second that is proportional to the energy of the computer, which is mostly locked up in its mass (E = mc²). A related limit, set by the system’s entropy, applies to memory. Thanks to reversible computing, energy need not be expended quickly in the course of calculation.
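As a concrete anchor, here is the headline number from Lloyd’s paper, recovered from the Margolus–Levitin bound of 2E/(πħ) operations per second (a minimal sketch using standard physical constants):

```python
import math

# Lloyd's "ultimate laptop": the Margolus-Levitin theorem caps the number
# of elementary operations per second at 2E / (pi * hbar) for energy E.
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J*s
m = 1.0            # laptop mass, kg

E = m * c**2                     # rest-mass energy, ~9e16 J
ops = 2 * E / (math.pi * hbar)   # ~5.4e50 operations per second
print(f"{ops:.1e} ops/s")
```

Almost all of that energy budget is rest mass, which is why mass-energy, rather than power, is the binding constraint.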
So, you need energy to make computers out of (much more than you need energy to power them). And, within 4 light-years, the Sun is where almost all of that energy is. Of course, we don’t have the technology to eat the Sun, so it isn’t really our decision to make. But, when will someone or something be making this decision?
Artificial intelligence that is sufficiently advanced could do everything a human could do, better and faster. If humans could eventually design machines that eat the Sun, then sufficiently advanced AI could do so sooner. There is some disagreement about “takeoff speeds”: the time from when AI is about as intelligent as humans to when it is far, far more intelligent.
My argument is that, when AI is far, far more intelligent than humans, it will understand the Sun as the most nutritious entity within 4 light-years, and eat it within a short time frame. It really is convergently instrumental to eat the Sun, in the sense of repurposing at least 50% of its mass-energy to make machines, including computers and their supporting infrastructure (“computronium”).
I acknowledge that some readers may think the Sun will never be eaten. Perhaps it sounds like sci-fi to them. Here, I will argue that Sun-eating is probable within the next 10,000 years.
Technological development has a ratchet effect: good technologies get invented, but usually don’t get lost, unless they weren’t very important/valuable (compared to other available technologies). Empirically, the rate of discovery seems to be increasing. To the extent pre-humans even had technology, it was developed a lot more slowly. Technology seems to be advancing a lot faster in the last 1000 years than it was from 5000 BC to 4000 BC. Part of the reason for the change in rate is that technologies build on other technologies; for example, the technology of computers allows discovery of other technologies through computational modeling.
So, we are probably approaching a stage where technology develops very quickly. Eventually, the rate of technological development will go down, due to depletion of low-hanging fruit. But before then, in the regime where technology is developing very rapidly, it will be both feasible and instrumentally important to run more computations, quickly. Computation is needed to research technologies, among other uses. Running sufficiently large computations requires eating the Sun; this will become feasible at some technology level, and reaching that level itself probably doesn’t require eating the Sun (eating the Earth likely provides more than enough mass-energy for the computational power needed to work out Sun-eating technology).
Let’s further examine the motive for creating many machines, including computers, quickly. Roughly, we can consider two different regimes of fast technology development: coordinated and uncoordinated.
- A coordinated regime will act like a single agent (or “singleton”), even if it’s composed of multiple agents. This regime would do some kind of long-termist optimization (in this setting, even a few years is pretty long-term). Of course, it would want to discover technology quickly, all else being equal (due to astronomical waste considerations). But it might be somewhat “environmentalist” about avoiding hard-to-reverse decisions, like expending a lot of energy. I still think it would eat the Sun, on the basis that it can later convert the resulting machines to other machines, if desired (it has access to many technologies, after all).
- In an uncoordinated regime, multiple agents compete for resources and control. Broadly, having more machines (including computers) and more technology grants a competitive advantage, which is a strong incentive to turn the Sun into machines and develop technologies quickly. Perhaps an uncoordinated regime can transition to a coordinated one, as either a single victor emerges or the most competitive players start coordinating.
This concludes the argument that the Sun will be largely eaten within the next 10,000 years. It really will be a major event in the history of the solar system; usually, not much happens to the Sun in 10,000 years. And I really think I’m being conservative in saying 10,000. In typical discourse this would be considered “very long ASI timelines”, under the assumption that an ASI would eat the Sun within a few years of arriving.
Thinking about the timing of Sun-eating seems more well-defined, and potentially precise, than thinking about the timeline of “human-level AGI” or “ASI”. These days, it’s hard to know what people mean by AGI. Does “AGI” mean a system that can answer math questions better than the average human? We already have that. Does it mean a system that can generate $100 billion in profit? Obvious legal fiction.
Sun-eating tracks a certain stage in AGI capability. Perhaps there are other concrete, material thresholds corresponding to convergent instrumental goals, which track earlier stages. These could provide more specific definitions for AGI-related forecasting.
Eating the Sun is reversible; letting it burn is what can’t be reversed. The environmentalist option is to eat the Sun as soon as possible.
I think that it’s likely to take longer than 10,000 years, simply because of the logistics (not the technology development, which the AI could do fast).
The gravitational binding energy of the Sun is on the order of 20 million years’ worth of its energy output. Granted, half of the needed energy is already present as thermal energy, and you don’t need to move every atom to infinity, but you still need a substantial fraction of that. And while you could perhaps generate many times more energy than the solar output by various means, I’d guess you’d have to deal with inefficiencies and lots of waste heat if you try to do it really fast. Maybe if you’re smart enough you can make going fast work well enough to be worth it, though?
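For what it’s worth, that figure checks out on a back-of-envelope basis (a sketch; the uniform-density formula 3GM²/5R is a lower bound, since the Sun’s centrally concentrated density profile raises the true binding energy roughly threefold):

```python
# Gravitational binding energy of the Sun (uniform-density approximation),
# expressed in years of the Sun's current luminosity.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # solar mass, kg
R = 6.957e8     # solar radius, m
L = 3.828e26    # solar luminosity, W

E_bind = 3 * G * M**2 / (5 * R)   # ~2.3e41 J
years = E_bind / L / 3.156e7      # ~3.156e7 seconds per year
print(f"{years:.1e} years of solar output")   # ~1.9e7, i.e. ~20 million
```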
I’m not sure what the details would look like, but I’m pretty sure ASI would have enough new technologies to figure something out within 10,000 years. And expending a bunch of waste heat could easily be worth it, if having more computers allows sending out von Neumann probes to other stars faster and more efficiently; the cost of expending the Sun’s energy has to be weighed against the ongoing cost of other stars burning.
I feel like this is the main load-bearing claim underlying the post, but it’s barely argued for.
In some sense the Sun is already “eating itself” via a fusion reaction that will last for billions more years. So you’re claiming that AI could eat the Sun (at least) six orders of magnitude faster, which is not obvious to me.
I don’t think my priors on that are very different from yours, but the thing that would have made this post valuable for me is some object-level reason to update my confidence in that.
It doesn’t have to expend the energy. It’s about reshaping the matter into machines. Computers take lots of mass-energy to constitute them, not to power them.
Things can go 6 orders of magnitude faster due to intelligence/agency; that’s not highly unlikely in general.
I agree that in theory the arguments here could be better. Making them might require knowing more physics than I do, and there’s the “how does Kasparov beat you at chess” problem.
If you can use 1 kg of hydrogen to lift x > 1 kg of hydrogen off the Sun using proton-proton fusion, you get exponential buildup, limited only by how many proton-proton reactors you can build in the Solar System and how willing you are to actually build them. You can then use that exponential buildup to create all the necessary infrastructure.
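A quick check that x really is much greater than 1 (a sketch; 0.7% is the standard mass-to-energy fraction for fusing hydrogen to helium, and GM/R is the energy to lift 1 kg from the surface to infinity):

```python
# Fusion yield per kg of hydrogen vs. the cost of lifting 1 kg off the Sun.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # solar mass, kg
R = 6.957e8     # solar radius, m
c = 2.998e8     # speed of light, m/s

fusion_per_kg = 0.007 * c**2   # ~6.3e14 J released fusing 1 kg of H
lift_per_kg = G * M / R        # ~1.9e11 J to lift 1 kg to infinity
print(fusion_per_kg / lift_per_kg)   # ~3300: x >> 1, so the buildup compounds
```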
I think if you want to go fast, and you can eat the rest of the solar system, you can probably make a huge swarm of fusion reactors to help blow matter off the Sun. Let’s say you can build 10^11-watt reactors that work in space. Then you need about 4×10^15 of them to match the Sun’s output. If each is 10^6 kg, that’s about 10^-2 of Mercury’s mass.
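Plugging in those numbers (a sketch; the reactor power and unit mass are the assumptions stated above, and the rest are measured values):

```python
# How many 10^11 W reactors match the Sun, and what do they weigh?
L_sun = 3.828e26     # solar luminosity, W
P_reactor = 1e11     # assumed power per reactor, W
m_reactor = 1e6      # assumed mass per reactor, kg
M_mercury = 3.30e23  # Mercury's mass, kg

n = L_sun / P_reactor                 # ~4e15 reactors
fraction = n * m_reactor / M_mercury  # ~1e-2 of Mercury's mass
print(f"{n:.1e} reactors, {fraction:.1e} of Mercury")
```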
If our story goes well, we might want to preserve our Sun for sentimental reasons.
We might even want to eat some other stars just to prevent the Sun from expanding and dying.
I would maybe want my kids to look up at a night sky somewhere far away and see a constellation with the little dot humanity came from still being up there.
Concrete existence, they point out, is less resource-efficient than dreams of the machine. It’s hard to tell how much value is tied up in physical form rather than computation, or whether humans would agree on this either way on reflection.
Do we have some basic physical-feasibility insights on this, or are you just speculating?
It’s a pretty straightforward modification of the Caplan thruster. You scoop up bits of the Sun with very strong magnetic fields, but rather than fusing the material and using it to move a star, you cool most of it (firing some back at very high velocity to balance momentum) and keep the matter you extract (or fuse some if you need quick energy). There’s even a video on it! Skip to 4:20 for the relevant bit.
I was expecting (Methods start 16:00)
The action space is too large for this to be infeasible. But at a 101 level: if the Sun spun fast enough, it would come apart, and since angular momentum is conserved, it can be added gradually.
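The 101-level claim is easy to quantify (a sketch; treating breakup as the point where equatorial centripetal acceleration matches surface gravity):

```python
import math

# Rotation period at which the Sun's equator would begin to shed mass.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # solar mass, kg
R = 6.957e8     # solar radius, m

omega = math.sqrt(G * M / R**3)         # breakup angular velocity, rad/s
period_h = 2 * math.pi / omega / 3600   # ~2.8 hours
print(f"{period_h:.1f} h, vs. the Sun's current ~600 h (25 days)")
```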
Mostly speculation based on tech level. But:
- To the extent temperature is an issue, energy can be used to move heat from one place to another.
- Matter from the Sun could be physically expelled in more manageable chunks; the Sun already ejects matter naturally (though at a slow rate).
- Nanotech in general (cell-like, self-replicating robots).
- High energy availability with less-speculative tech like Dyson spheres.
This seemed like a nice explainer post, though it’s somewhat confusing who the post is for. If I imagine being someone who didn’t really understand any arguments about superintelligence, I think I might bounce off the opening paragraph or title, because I’d be thinking “why would I care about eating the Sun?”
There is something nice and straightforward about the current phrasing, but I suspect there’s an opening paragraph that would do a better job of explaining why you might care about this.
(But I’d be curious to hear from people who weren’t really sold on any singularity stuff who read it and can describe how it was for them)
I think it’s partially meant to move people from an abstract model of intelligence as a scalar variable that increases at some rate (like on an x/y graph) to concrete, material milestones. People can imagine “intelligence goes up rapidly! singularity!” without it being clear what that implies; I’m saying sufficient levels would imply eating the Sun, which makes it harder to confuse with things like “getting higher scores on math tests”.
I suppose a more general framing would be: the relevant kind of self-improving intelligence is the sort that can repurpose mass-energy into more computation that can run its intelligence, and “eat the Sun” is an obvious target given this background notion of intelligence.
(Note: there is skepticism about feasibility on Twitter/X; that’s some info about how non-singularitarians react.)
Ignoring such confusion is good for hardening the frame in which the content is straightforward. It’s inconvenient to always contextualize; refusing to do so carves out space for more comfortable communication.
I was already sold on the singularity. For what it’s worth, I found the post and comments very helpful on why you would want to take the Sun apart in the first place, and why it would be feasible and desirable for both superintelligent and non-superintelligent civilizations. (Turning the Sun into a smaller sun that doesn’t explode seems nicer than having it explode. Fusion gives off far more energy than lifting the material requires; gravity is the weakest of the four forces, after all. And in a superintelligent civilization with reversible computers, not taking apart the Sun would make readily available mass a taut constraint.)
I might write a top-level post or shortform about this at some point. I find it baffling how casually people talk about dismantling the Sun around here. I recognize that this post makes no normative claim that we should do it, but it doesn’t say that it would be bad either, and it expects that we will do it even if humanity remains in power. I think we probably won’t do it if humanity remains in power, we shouldn’t do it, and if humanity does disassemble the Sun, it will probably happen for some very bad reason, like a fanatical dictatorship getting into power.
If we get some even vaguely democratic system that respects human rights at least a little, then many people (probably the vast majority) will want to live on Earth in their physical bodies, and many will want to have children, and many of those children will also want to live on Earth and have children of their own. I find it unlikely that all subcultures that want this will die out on Earth within 10,000 years, especially considering the selection effects: the subcultures that prefer to have natural children on Earth are the ones that natural selection favors on Earth. So the scenarios where humanity dismantles the Sun probably involve a dictatorship rounding up the Amish and killing them, while maybe uploading their minds somewhere, against all their protestations. Or possibly rounding up the Amish and forcibly “increasing their intelligence and wisdom” by some artificial means, until they realize that their “coherent extrapolated volition” was in agreement with the dictatorship all along, and then killing off their bodies once their new minds consent. I find this option hardly any better. (Also, it’s not just the Amish you are hauling to the extermination camps kicking and screaming, but my mother too. And probably your mother as well. Please don’t do that.)
Also, I think the astronomical waste is probably pretty negligible. You can probably create a very good industrial base around the Sun with just a Dyson swarm that doesn’t intercept enough light to be noticeable from Earth. And then you can send out some probes to Alpha Centauri and the other neighboring stars to dismantle them, if we really want to. How much time do we lose by this? My guess is at most a few years, and we probably want to take some years anyway to do some reflection before we start any giant project.
People sometimes accuse the rationalist community of being aggressive naive utilitarians, who only believe that AGI is going to kill everyone because they are projecting themselves onto it: they too would kill everyone if they got power, so they could establish their mind-uploaded, we-are-the-grabby-aliens, turn-the-stars-into-computronium utopia a few months earlier. I think this accusation is mostly false, and most rationalists are in fact pretty reasonable and want to respect other people’s rights and so on. But when I see people casually discussing dismantling the Sun, with only one critical comment (Mikhail’s) saying we shouldn’t do it, and it shows up in Solstice songs as a thing we want to do in the Great Transhumanist Future twenty years from now, I start worrying again that the critics are right and we are the bad guys.
I prefer to think that it’s not because people are in fact happy about massacring the Amish and their own mothers, but because dismantling the Sun is a meme, and people don’t think through what it means. Anyway, please stop.
(Somewhat relatedly, I think it’s not obvious at all that a misaligned AGI that takes over the world would dismantle the Sun. It is more likely to do so than humanity would be, but still, I don’t know how you could be at all confident that the first misaligned AI to take over will be the type of linear utilitarian optimizer that really cares about conquering the last stars at the edge of the Universe, and so needs to dismantle the star to speed up its conquest by a few years.)