On Eating the Sun

The Sun is the most nutritious thing that’s reasonably close. It’s only 8 light-minutes away, yet contains the vast majority of mass within 4 light-years of the Earth. The next-nearest star, Proxima Centauri, is about 4.25 light-years away.
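
For scale, here is a rough census (a back-of-the-envelope sketch in Python using standard astronomical values; it ignores comets, asteroids, and dust, which don’t change the picture):

```python
# Rough mass census of everything within 4 light-years of Earth.
# Proxima Centauri, at ~4.25 light-years, falls just outside this radius.
SUN_KG = 1.989e30      # mass of the Sun
PLANETS_KG = 2.7e27    # all eight planets combined (Jupiter alone is ~1.9e27)

sun_fraction = SUN_KG / (SUN_KG + PLANETS_KG)
print(f"Sun's share of nearby mass: {sun_fraction:.2%}")  # ~99.86%
```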

By “nutritious”, I mean it has a lot of what’s needed for making computers: mass-energy. In “Ultimate physical limits to computation”, Seth Lloyd imagines an “ultimate laptop”: the theoretically best computer that can be made from 1 kilogram of mass contained in 1 liter of volume. He notes a limit on calculations per second that is proportional to the energy of the computer, which is mostly locked up in its mass (E = mc²). A related limit, set by the computer’s entropy, bounds how much memory it can have. And because of reversible computing, energy need not be expended in the course of calculation; it mostly just needs to be embodied in the computer.
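
To get a sense of the numbers, here is the calculation for the 1-kilogram case (a sketch using the Margolus–Levitin bound of 2E/πℏ operations per second that Lloyd’s paper relies on; the constants are standard physical values):

```python
import math

C = 2.998e8        # speed of light, m/s
HBAR = 1.055e-34   # reduced Planck constant, J*s

mass_kg = 1.0                # Lloyd's 1-kilogram "ultimate laptop"
energy_j = mass_kg * C**2    # E = mc^2: ~9e16 J locked up in the mass

# Margolus-Levitin bound: at most 2E / (pi * hbar) operations per second.
ops_per_sec = 2 * energy_j / (math.pi * HBAR)
print(f"Max operations per second: {ops_per_sec:.1e}")  # ~5.4e50
```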

So, you need energy to make computers out of (much more than you need energy to power them). And, within 4 light-years, the Sun is where almost all of that energy is. Of course, we don’t have the technology to eat the Sun, so it isn’t really our decision to make. But, when will someone or something be making this decision?

Artificial intelligence that is sufficiently advanced could do everything a human could do, better and faster. If humans could eventually design machines that eat the Sun, then sufficiently advanced AI could do so faster. There is some disagreement about “takeoff speeds”: that is, the time from when AI is about as intelligent as humans to when it is far, far more intelligent.

My argument is that, when AI is far, far more intelligent than humans, it will understand the Sun as the most nutritious entity within 4 light-years, and eat it within a short time frame. It really is convergently instrumental to eat the Sun, in the sense of repurposing at least 50% of its mass-energy to make machines, including computers and their supporting infrastructure (“computronium”).

I acknowledge that some readers may think the Sun will never be eaten. Perhaps it sounds like sci-fi to them. Here, I will argue that Sun-eating is probable within the next 10,000 years.

Technological development has a ratchet effect: good technologies get invented, but usually don’t get lost, unless they weren’t very important or valuable compared to other available technologies. Empirically, the rate of discovery seems to be increasing. To the extent pre-humans even had technology, it developed a lot more slowly. Technology seems to be advancing much faster in the last 1000 years than it did from 5000 BC to 4000 BC. Part of the reason for the change in rate is that technologies build on other technologies; for example, computers enable the discovery of other technologies through computational modeling.

So, we are probably approaching a stage where technology develops very quickly. Eventually, the rate of technology development will go down, due to depletion of low-hanging fruit. But before then, in the regime where technology is developing very rapidly, it will be both feasible and instrumentally important to run more computations, quickly. Computation is needed to research technologies, among other uses. Running sufficiently difficult computations requires eating the Sun; this becomes feasible at some technology level, and reaching that level itself probably doesn’t require eating the Sun (eating the Earth probably provides more than enough mass-energy for the computational power needed to figure out how to eat the Sun).
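
To put rough numbers on that parenthetical (a hedged sketch reusing the per-kilogram bound from Lloyd’s paper; whether the research needed to eat the Sun fits within an Earth-mass compute budget is this argument’s conjecture, not something these numbers establish):

```python
import math

C = 2.998e8        # speed of light, m/s
HBAR = 1.055e-34   # reduced Planck constant, J*s

def max_ops_per_sec(mass_kg: float) -> float:
    """Margolus-Levitin bound, 2E/(pi*hbar), with E = mc^2."""
    return 2 * mass_kg * C**2 / (math.pi * HBAR)

EARTH_KG = 5.97e24
SUN_KG = 1.989e30

print(f"Earth-mass computer: {max_ops_per_sec(EARTH_KG):.1e} ops/sec")  # ~3.2e75
print(f"Sun-mass computer:   {max_ops_per_sec(SUN_KG):.1e} ops/sec")    # ~1.1e81
```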

Let’s further examine the motive for creating many machines, including computers, quickly. Roughly, we can consider two different regimes of fast technology development: coordinated and uncoordinated.

  1. A coordinated regime will act like a single agent (or “singleton”), even if it’s composed of multiple agents. This regime would do some kind of long-termist optimization (in this setting, even a few years is pretty long-term). Of course, it would want to discover technology quickly, all else being equal, due to astronomical waste considerations: while expansion is delayed, stars radiate away energy that could have been put to use. But it might be somewhat “environmentalist” in terms of avoiding hard-to-reverse decisions, like expending a lot of energy. I still think it would eat the Sun, on the basis that it can later convert these machines to other machines, if desired (it has access to many technologies, after all).

  2. In an uncoordinated regime, multiple agents compete for resources and control. Broadly, having more machines (including computers) and more technology grants a competitive advantage. That is a strong incentive to turn the Sun into machines and to develop technologies quickly. Perhaps an uncoordinated regime can transition to a coordinated one, as either a single victor emerges or the most competitive players start coordinating.

This concludes the argument that the Sun will be largely eaten within the next 10,000 years. It really will be a major event in the history of the solar system. Usually, not much happens to the Sun in 10,000 years. And I really think I’m being conservative in saying 10,000. In typical discourse, this would be considered a very long ASI timeline, under the assumption that an ASI would eat the Sun within a few years of its creation.

Thinking about the timing of Sun-eating seems better defined, and potentially more precise, than thinking about timelines for “human-level AGI” or “ASI”. These days, it’s hard to know what people mean by AGI. Does “AGI” mean a system that can answer math questions better than the average human? We already have that. Does it mean a system that can generate $100 billion in profit? Obvious legal fiction.

Sun-eating tracks a certain stage in AGI capability. Perhaps there are other concrete, material thresholds corresponding to convergent instrumental goals, which track earlier stages. These could provide more specific definitions for AGI-related forecasting.