The Sun is the most nutritious thing that’s reasonably close. It’s only 8 light-minutes away, yet contains the vast majority of mass within 4 light-years of the Earth. The next-nearest star, Proxima Centauri, is about 4.25 light-years away.
By “nutritious”, I mean it has a lot of what’s needed for making computers: mass-energy. In “Ultimate physical limits to computation”, Seth Lloyd imagines an “ultimate laptop” which is the theoretically best computer that is 1 kilogram of mass, contained in 1 liter. He notes a limit to calculations per second that is proportional to the energy of the computer, which is mostly locked in its mass (E = mc²). Such an energy-proportional limit applies to memory too. Energy need not be expended quickly in the course of calculation, due to reversible computing.
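As a rough, hedged check on the scale Lloyd is pointing at, here is a minimal back-of-the-envelope sketch using the Margolus-Levitin bound he invokes (operations per second ≤ 2E/πℏ) applied to a 1-kilogram computer. Treat it as an order-of-magnitude illustration rather than an engineering claim:

```python
# Rough check of Lloyd's "ultimate laptop" figure, via the Margolus-Levitin
# bound he invokes: operations per second <= 2 * E / (pi * hbar).
# Illustrative only; see Lloyd (2000) for the careful treatment.

import math

c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J*s

mass_kg = 1.0                           # Lloyd's 1 kg laptop
E = mass_kg * c**2                      # rest-mass energy, ~9e16 J
ops_per_sec = 2 * E / (math.pi * hbar)

print(f"{ops_per_sec:.1e} operations per second")  # ~5.4e50, matching Lloyd's figure
```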
So, you need energy to make computers out of (much more than you need energy to power them). And, within 4 light-years, the Sun is where almost all of that energy is. Of course, we don’t have the technology to eat the Sun, so it isn’t really our decision to make. But, when will someone or something be making this decision?
Artificial intelligence that is sufficiently advanced could do everything a human could do, better and faster. If humans could eventually design machines that eat the Sun, then sufficiently advanced AI could do so faster. There is some disagreement about “takeoff speeds”, that is, the time from when AI is about as intelligent as humans to when it is far, far more intelligent.
My argument is that, when AI is far far more intelligent than humans, it will understand the Sun as the most nutritious entity that is within 4 light-years, and eat it within a short time frame. It really is convergently instrumental to eat the Sun, in the sense of repurposing at least 50% of its mass-energy to make machines including computers and their supporting infrastructure (“computronium”).
I acknowledge that some readers may think the Sun will never be eaten. Perhaps it sounds like sci-fi to them. Here, I will argue that Sun-eating is probable within the next 10,000 years.
Technological development has a ratchet effect: good technologies get invented, but usually don’t get lost, unless they weren’t very important/valuable (compared to other available technologies). Empirically, the rate of discovery seems to be increasing. To the extent pre-humans even had technology, it was developed a lot more slowly. Technology seems to be advancing a lot faster in the last 1000 years than it was from 5000 BC to 4000 BC. Part of the reason for the change in rate is that technologies build on other technologies; for example, the technology of computers allows discovery of other technologies through computational modeling.
So, we are probably approaching a stage where technology develops very quickly. Eventually, the rate of technology development will go down, due to depletion of low-hanging fruit. But before then, in the regime where technology is developing very rapidly, it will be both feasible and instrumentally important to run more computations, quickly. Computation is needed to research technologies, among other uses. Running sufficiently difficult computations requires eating the Sun; doing so will become feasible at some technology level, and reaching that level itself probably doesn’t require eating the Sun (eating the Earth probably provides more than enough mass-energy for the computation needed to work out how to eat the Sun).
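To make the mass-energy comparison behind this paragraph concrete, here is a small illustrative sketch (standard textbook figures for the masses; “more than enough” is of course a judgment call, not something this arithmetic settles):

```python
# Quick scale comparison: the Sun holds nearly all nearby mass-energy,
# yet the Earth alone is still an enormous amount of raw material.

sun_mass_kg   = 1.989e30
earth_mass_kg = 5.972e24
c = 2.998e8   # m/s

print(f"Sun / Earth mass ratio: {sun_mass_kg / earth_mass_kg:.2e}")   # ~3.3e5
print(f"Earth rest-mass energy: {earth_mass_kg * c**2:.2e} J")        # ~5.4e41 J
print(f"Sun rest-mass energy:   {sun_mass_kg * c**2:.2e} J")          # ~1.8e47 J
```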
Let’s further examine the motive for creating many machines, including computers, quickly. Roughly, we can consider two different regimes of fast technology development: coordinated and uncoordinated.
- A coordinated regime will act like a single agent (or “singleton”), even if it’s composed of multiple agents. This regime would do some kind of long-termist optimization (in this setting, even a few years is pretty long-term). Of course, it would want to discover technology quickly, all else being equal (due to astronomical waste considerations). But it might be somewhat “environmentalist” in terms of avoiding making hard-to-reverse decisions, like expending a lot of energy. I still think it would eat the Sun, on the basis that it can later convert these machines to other machines, if desired (it has access to many technologies, after all).
- In an uncoordinated regime, multiple agents compete for resources and control. Broadly, having more machines (including computers) and more technology grants a competitive advantage. That is a strong incentive to turn the Sun into machines and develop technologies quickly. Perhaps an uncoordinated regime can transition to a coordinated one, as either there is a single victor, or the most competitive players start coordinating.
This concludes the argument that the Sun will be largely eaten in the next 10,000 years. It really will be a major event in the history of the solar system. Usually, not much happens to the Sun in 10,000 years. And I really think I’m being conservative in saying 10,000. In typical discourse, this would be considered “very long ASI timelines”, under the assumption that ASI eats the Sun within a few years of arriving.
Thinking about the timing of Sun-eating seems more well-defined, and potentially precise, than thinking about the timeline of “human-level AGI” or “ASI”. These days, it’s hard to know what people mean by AGI. Does “AGI” mean a system that can answer math questions better than the average human? We already have that. Does it mean a system that can generate $100 billion in profit? Obvious legal fiction.
Sun-eating tracks a certain stage in AGI capability. Perhaps there are other concrete, material thresholds corresponding to convergent instrumental goals, which track earlier stages. These could provide more specific definitions for AGI-related forecasting.
Eating the Sun is reversible; it’s letting it burn that can’t be reversed. The environmentalist option is to eat the Sun as soon as possible.
I might write a top level post or shortform about this at some point. I find it baffling how casually people talk about dismantling the Sun around here. I recognize that this post makes no normative claim that we should do it, but it doesn’t say that it would be bad either, and it expects that we will do it even if humanity remains in power. I think we probably won’t do it if humanity remains in power, we shouldn’t do it, and if humanity disassembles the Sun, it will probably happen for some very bad reason, like a fanatical dictatorship getting in power.
If we get some even vaguely democratic system that respects human rights at least a little, then many people (probably the vast majority) will want to live on Earth in their physical bodies and many will want to have children, and many of those children will also want to live on Earth and have children of their own. I find it unlikely that all subcultures that want this will die out on Earth in 10,000 years, especially considering the selection effects: the subcultures that prefer to have natural children on Earth are the ones that natural selection favors on Earth. So the scenarios in which humanity dismantles the Sun probably involve a dictatorship rounding up the Amish and killing them while maybe uploading their minds somewhere, against all their protestations. Or possibly rounding up the Amish, and forcibly “increasing their intelligence and wisdom” by some artificial means, until they realize that their “coherent extrapolated volition” was in agreement with the dictatorship all along, and then killing off their bodies after their new mind consents. I find this option hardly any better. (Also, it’s not just the Amish you are hauling to the extermination camps kicking and screaming, but my mother too. And probably your mother as well. Please don’t do that.)
Also, I think the astronomical waste is probably pretty negligible. You can probably create a very good industrial base around the Sun with just some Dyson swarm that doesn’t block enough light to be noticeable from Earth. And then you can just send out some probes to Alpha Centauri and the other neighboring stars to dismantle them if we really want to. How much time do we lose by this? My guess is at most a few years, and we probably want to take some years anyway to do some reflection before we start any giant project.
People sometimes accuse the rationalist community of being aggressive naive utilitarians, who only believe that the AGI is going to kill everyone because they are projecting themselves onto it, as they also want to kill everyone if they get power, so they can establish their mind-uploaded, we-are-the-grabby-aliens, turn-the-stars-into-computronium utopia a few months earlier that way. I think this accusation is mostly false, and most rationalists are in fact pretty reasonable and want to respect other people’s rights and so on. But when I see people casually discussing dismantling the Sun, with only one critical comment (Mikhail’s) saying that we shouldn’t do it, and it shows up in Solstice songs as a thing we want to do in the Great Transhumanist Future twenty years from now, I start worrying again that the critics are right, and we are the bad guys.
I prefer to think that it’s not because people are in fact happy about massacring the Amish and their own mothers, but because dismantling the Sun is a meme, and people don’t think through what it means. Anyway, please stop.
(Somewhat relatedly, I think it’s not obvious at all that if a misaligned AGI takes over the world, it will dismantle the Sun. It is more likely to do it than humanity would, but still, I don’t know how you could be at all confident that the misaligned AI that first takes over will be the type of linear utilitarian optimizer that really cares about conquering the last stars at the edge of the Universe, and so needs to dismantle the star in order to speed up its conquest by a few years.)
Without making any normative arguments: if you’re in a position (industrially and technologically) to disassemble the sun at all, or build something like a Dyson swarm, then it’s probably not too difficult to build an artificial system to light the Earth in such a way as to mimic the sun, and make it look and feel nearly identical to biological humans living on the surface, using less than a billionth of the sun’s normal total light output. The details of tides might be tricky, but probably not out of reach.
You’re such a traditionalist!
More seriously, accusing rationalists of hauling the Amish and their mothers to camps doesn’t seem quite fair. Like you said, most rationalists seem pretty nice and aren’t proposing involuntary rapid changes. And this post certainly didn’t.
You’d need to address the actual arguments in play to write a serious post about this. “Don’t propose weird stuff” isn’t a very good argument. You could argue that this went very poorly with communism, or come up with some other argument. Actually, I think rationalists have come up with some. It looks to me like the more respected rationalists are pretty cautious about doing weird drastic stuff just because the logic seems correct at the time. See the unilateralist’s curse and Yudkowsky’s and others’ pleas that nobody do anything drastic about AGI even though they think it’s very likely going to kill us all.
This stuff is fun to think about, but it’s planning the victory party before planning how to win the war.
How to put the future into kind and rational hands seems like an equally interesting and much more urgent project right now. I’d be fine with a pretty traditional utopian future or a very weird one, but not fine with joyless machines eating the sun, or worse yet all of the suns they can reach.
So, I’m with you on “hey guys, uh, this is pretty horrifying, right? Uh, what’s with the missing mood about that.”
The thing is that not doing it is also horrifying. I.e., see also All Possible Views About Humanity’s Future Are Wild. To not eat the sun is to throw away orders of magnitude more resources than anyone has ever thrown away before. Is it, percentage-wise, “a small fraction of the cosmos”? Maybe. But (quickly checks Claude, which wrote up a fermi code snippet before answering; I can share the work if you want to doublecheck yourself), a two-year delay would be… 200 galaxies lost, longterm.
When you compare that to “the Amish get a Sun Replica that doesn’t change their experience”, the question “Is it worth throwing away 80 trillion stars to have the real thing?” is, like, not a trivial one.
IMO there isn’t an option that isn’t at least a bit horrifying in some sense that one could have a missing mood about. And while I still feel unsettled about it, I think if I have to grieve something, it makes more sense to grieve in the direction of “don’t throw away 80 trillion stars worth of resources.”
I think you’re also maybe just not appreciating how much would change in 10,000 years? Like, there is no single culture that has survived 10,000 years. (Maybe one of those small tribes in the Amazon? I’d still bet on there having been a lot of cultural drift there, but not confidently.) The Amish are only a few hundred years old. I can imagine doing a lot of moral reflection and coming to the conclusion that the sun shouldn’t be eaten until all human cultures have decided it’s the right thing to do, but I do really doubt that process takes 10,000 years.
This (the Sun is the only important local source of … anything) has been an obvious conclusion for decades. Freeman Dyson described one avenue to capturing all the energy in 1960. The recent change that makes it more salient (and reframes it as “eating or inhabiting the sun” rather than “capturing the sun’s output”) is the progress in AI, which does two things:
Adds weight to the computational theory of mind. If everything important is “just” computation, then all this expense and complexity of human bodies and organic brains is temporary and will be unnecessary in the future. This simplifies the problem of HOW to eat the sun into just how to make computronium out of it.
Provides a more believable path for solving very hard engineering problems, by using smarter engineers than we can currently birth and train. It does NOT actually solve the problems, or even prove that they’re solvable. We don’t actually know what computation is (for this purpose), or how to optimize the entropy problem of “making one thing more ordered always makes other things less ordered”.
That said, I don’t know how to make beliefs on this scale pay any rent. “within 10,000 years” and “that’s just science fiction” are identical labels to me.
If our story goes well, we might want to preserve our Sun for sentimental reasons.
We might even want to eat some other stars just to prevent the Sun from expanding and dying.
I would maybe want my kids to look up at a night sky somewhere far away and see a constellation with the little dot humanity came from still being up there.
Concrete existence, they point out, is less resource efficient than dreams of the machine. It’s hard to tell how much value is tied up in physical form rather than computation, or whether humans would agree on this either way on reflection.
I think that it’s likely to take longer than 10000 years, simply because of the logistics (not the technology development, which the AI could do fast).
The gravitational binding energy of the sun is something on the order of 20 million years’ worth of its energy output. OK, half of the needed energy is already present as thermal energy, and you don’t need to move every atom to infinity, but you still need a substantial fraction of that. And while you could perhaps generate many times more energy than the solar output by various means, I’d guess you’d have to deal with inefficiencies and lots of waste heat if you try to do it really fast. Maybe if you’re smart enough you can make going fast work well enough to be worth it, though?
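For anyone who wants to double-check the order of magnitude, here is a quick sketch using the uniform-sphere approximation U ≈ 3GM²/5R (the real Sun is centrally condensed, so this somewhat understates the binding energy):

```python
# Sanity check of the "~20 million years of solar output" figure above.
# Uniform-sphere approximation: U ~ 3 * G * M^2 / (5 * R).

G = 6.674e-11      # m^3 kg^-1 s^-2
M = 1.989e30       # solar mass, kg
R = 6.957e8        # solar radius, m
L = 3.828e26       # solar luminosity, W
YEAR = 3.156e7     # seconds per year

U = 3 * G * M**2 / (5 * R)
print(f"Binding energy: {U:.2e} J")                  # ~2.3e41 J
print(f"Years of solar output: {U / L / YEAR:.1e}")  # ~1.9e7 years
```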
If you can use 1 kg of hydrogen to lift x > 1 kg of hydrogen using proton-proton fusion, you are getting exponential buildup, limited only by “how many proton-proton reactors you can build in the Solar system” and “how willing you are to actually build them”, and you can use that exponential buildup to create all the necessary infrastructure.
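A rough sanity check that x > 1 is at least energetically plausible, ignoring all engineering losses: hydrogen fusion releases about 0.7% of rest-mass energy, while lifting a kilogram from the solar surface to infinity costs roughly GM/R.

```python
# Rough feasibility check for "1 kg of fused hydrogen lifts x > 1 kg off the Sun".
# Ignores all engineering losses; just compares ideal energy budgets.

G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
R_sun = 6.957e8        # m
c = 2.998e8            # m/s

fusion_energy_per_kg = 0.007 * c**2       # pp-chain releases ~0.7% of rest mass, ~6.3e14 J/kg
lift_energy_per_kg = G * M_sun / R_sun    # surface-to-infinity cost, ~1.9e11 J/kg

x = fusion_energy_per_kg / lift_energy_per_kg
print(f"Ideal mass lifted per kg fused: ~{x:.0f} kg")  # a few thousand, so x >> 1 in principle
```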
I’m not sure what the details would look like, but I’m pretty sure ASI would have enough new technologies to figure something out within 10,000 years. And generating a bunch of waste heat could easily be worth it, if having more computers allows sending out Von Neumann probes faster / more efficiently to other stars, since the cost of expending the Sun’s energy has to be compared with the ongoing cost of other stars burning.
I feel like this is the main load-bearing claim underlying the post, but it’s barely argued for.
In some sense the sun is already “eating itself” by doing a fusion reaction, which will last for billions more years. So you’re claiming that AI could eat the sun (at least) six orders of magnitude faster, which is not obvious to me.
I don’t think my priors on that are very different from yours but the thing that would have made this post valuable for me is some object-level reason to upgrade my confidence in that.
It doesn’t have to expend the energy. It’s about reshaping the matter into machines. Computers take lots of mass-energy to constitute them, not to power them.
Things can go 6 orders of magnitude faster due to intelligence/agency, it’s not highly unlikely in general.
I agree that in theory the arguments here could be better. It might require knowing more physics than I do, and has the “how does Kasparov beat you at chess” problem.
I think if you want to go fast, and you can eat the rest of the solar system, you can probably make a huge swarm of fusion reactors to help blow matter off the sun. Let’s say you can build 10^11-watt reactors that work in space. Then you need about 10^15 of them to match the sun. If each is 10^6 kg, this is about 10^-2 of Mercury’s mass.
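Writing out this Fermi estimate explicitly (the 10^11 W and 10^6 kg per reactor are the assumed figures from above, not established numbers):

```python
# The Fermi estimate above, written out (illustrative numbers only).

L_sun = 3.8e26            # W, solar luminosity to match
reactor_power = 1e11      # W per reactor (assumed)
reactor_mass = 1e6        # kg per reactor (assumed)
mercury_mass = 3.3e23     # kg

n_reactors = L_sun / reactor_power
total_mass = n_reactors * reactor_mass

print(f"Reactors needed: {n_reactors:.1e}")                     # ~4e15
print(f"Total mass: {total_mass:.1e} kg "
      f"(~{total_mass / mercury_mass:.0e} of Mercury's mass)")  # ~1e-2
```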
Do we have some basic physical-feasibility insights on this, or are you just speculating?
It’s a pretty straightforward modification of the Caplan thruster. You scoop up bits of sun with very strong magnetic fields, but rather than fusing it and using it to move a star, you cool most of it (firing some back with very high velocity to balance things momentum-wise) and keep the matter you extract (or fuse some if you need quick energy). There’s even a video on it! Skip to 4:20 for the relevant bit.
I was expecting (Methods start 16:00)
The action space is too large for this to be infeasible, but at a 101 level: if the Sun spun fast enough it would come apart, and angular momentum is conserved, so it’s easy to add gradually.
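For a sense of scale, here is a crude sketch of the breakup spin rate, the point at which equatorial material becomes unbound (ω ≈ √(GM/R³)); it ignores the fact that the Sun would deform and shed mass well before reaching it:

```python
# Crude estimate of the Sun's breakup rotation period: the spin at which
# gravity at the equator just balances centrifugal acceleration,
# omega = sqrt(G * M / R^3). Ignores deformation and mass loss along the way.

import math

G = 6.674e-11     # m^3 kg^-1 s^-2
M = 1.989e30      # kg
R = 6.957e8       # m

omega_breakup = math.sqrt(G * M / R**3)
period_hours = 2 * math.pi / omega_breakup / 3600

print(f"Breakup period: ~{period_hours:.1f} hours")  # ~2.8 h, vs the current ~25-day rotation
```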
A very heavy and dense body on an elliptical orbit that touches the Sun’s surface at each perihelion would collect sizable chunks of the Sun’s matter. The movement of matter from one star to another nearby star is a well-known phenomenon.
When the body reaches aphelion, the collected solar matter would cool down and could be harvested. The initial body would need to be very massive, perhaps 10-100 Earth masses. A Jupiter-sized core could work as such a body.
Therefore, to extract the Sun’s mass, one would need to make Jupiter’s orbit elliptical. This could be achieved through several heavy impacts or gravitational maneuvers involving other planets.
This approach seems feasible even without ASI, but it might take longer than 10,000 years.
Mostly speculation based on tech level. But:
To the extent temperature is an issue, energy can be used to move heat from one place to another.
Maybe matter from the Sun can be physically expelled into more manageable chunks. The Sun already ejects matter naturally (though at a slow rate).
Nanotech in general (cell-like, self-replicating robots).
High energy availability with less-speculative tech like Dyson spheres.
I’m not sure eating the sun is such a great idea.
If the sun goes out suddenly, it’s a pretty clear tipoff that something major is happening over here. Anyone with preferences who sees that might worry about having to compete with whoever ate the sun. They could do something drastic.
Our offspring might conclude that anyone willing to do drastic things to strangers would already be going hard on spreading and eating suns, so it would only signal meaningfully to relatively peaceful types. But I’m not sure we could be sure. Someone might be hiding, doing drastic things in sneaky ways to anyone who shows themselves.
But it does seem like quite a shame to let most of the accessible universe just burn up because you’re paranoid about the neighbors.
It will be quite a dilemma, unless there’s some compelling logic we’re missing so far, or observations that would allow such logic. Which there could be.
I think this shades into dark forest theory. Broadly my theory about aliens in general is that they’re not effectively hiding themselves, and we don’t see them because any that exist are too far away.
Partially it’s a matter of: if aliens wanted to hide, could they? Sure, eating a star would show up in terms of light patterns, but also, so would being a civilization at the scale of 2025-Earth. And my argument is that these aren’t that far apart in cosmological terms (<10K years).
So, I really think alien encounters are in no way an urgent problem: we won’t encounter them for a long time, and if they get light from 2025-Earth, they’ll already have some idea that something big is likely to happen soon on Earth.
This seemed like a nice explainer post, though it’s somewhat confusing who the post is for – if I imagine being someone who didn’t really understand any arguments about superintelligence, I think I might bounce off the opening paragraph or title because I’m like “why would I care about eating the sun.”
There is something nice and straightforward about the current phrasing, but I suspect there’s an opening paragraph that would do a better job of explaining why you might care about this.
(But I’d be curious to hear from people who weren’t really sold on any singularity stuff who read it and can describe how it was for them)
I think partially it’s meant to go from some sort of abstract model of intelligence as a scalar variable that increases at some rate (like, on an x/y graph) to concrete, material milestones. Like, people can imagine “intelligence goes up rapidly! singularity!” and it’s unclear what that implies; I’m saying sufficient levels would imply eating the sun, which makes it harder to confuse with things like “getting higher scores on math tests”.
I suppose a more general category would be: the relevant kind of self-improving intelligence is the sort that can re-purpose mass-energy into creating more computation that can run its intelligence, and “eat the Sun” is an obvious target given this background notion of intelligence.
(Note, there is skepticism about feasibility on Twitter/X, that’s some info about how non-singulatarians react)
I was already sold on singularity. For what it’s worth I found the post and comments very helpful for why you would want to take the sun apart in the first place and why it would be feasible and desirable for superintelligent and non-superintelligent civilization (Turning the sun into a smaller sun that doesn’t explode seems nicer than having it explode. Fusion gives off way more energy than lifting the material. Gravity is the weakest of the 4 forces after all. In a superintelligent civilization with reversible computers, not taking apart the sun will make readily available mass a taut constraint).
Ignoring such confusion is good for hardening the frame where the content is straightforward. It’s inconvenient to always contextualize; refusing to do so carves out the space for more comfortable communication.
I agree with Richard Ngo and Simon that any dismantling of the sun is going to be a long-term project, and this matters.
What do you think Richard Ngo claimed about this?
That the claim that ASI could easily (relative to the sun’s own efforts) dismantle the star completely was barely argued for, and that his priors weren’t moved much and he wanted object-level reasons to believe it was feasible to rapidly dismantle the sun.
Richard said “I don’t think my priors on that are very different from yours but the thing that would have made this post valuable for me is some object-level reason to upgrade my confidence in that.” He didn’t say it’d be a long-term project; I think he just meant he didn’t change his beliefs about it due to this post.