Assuming large-scale quantum computing is possible, the ultimate computer is a reversible, massively entangled quantum device operating at absolute zero. Unfortunately, such a device would be delicate to a degree that is hard to imagine: even a single misplaced high-energy particle could cause enormous damage. In this model, an advanced computational civilization would take the form of a compact body (anywhere from asteroid to planet size) that employs layers of sophisticated shielding to deflect as much of the incoming particle flux as possible. The ideal environment for such a device is as far from hot stars as possible. The extreme energy efficiency of advanced low-temperature reversible/quantum computing implies that energy is not a constraint. These advanced civilizations could probably power themselves using fusion reactors for millions, if not billions, of years.
I don’t understand why this predicts no Dyson spheres, no visible mega-engineering, etc, and convergent self-limiting to a handful of solar systems and cold brains per civilization.
Computing near the Sun costs more because it’s hotter, sure. Fortunately, I understand that the Sun produces hundreds, even thousands of times more energy than a little fusion reactor does, so some inefficiencies are not a problem. You say that the reversible brains don’t need that much energy. OK, but more computing power is always better, the cold brains want as much as possible, so what limits them? If it’s energy, then they will want to pipe in as much energy as possible from their local star. If it’s putting matter into the right configuration for cold brains and shielding, then they will… want to pipe in as much matter lifted by energy as possible from their local star so they can build even more cold brains. Space is vast, so it’s not like they’re going to run out of cold places to put cold brains, and even if they do, well, a Dyson sphere around a star will fix that, so they’ll keep expanding with the matter & energy. Interconnects and IO use up a lot of energy? Well, we already know how to solve that. Whatever the binding limit to their computational power is, it seems to be solved by either more matter, more energy, or both, and the largest available source of both is stars, far from being ‘trash heaps’.
And since they are already expanding, their massive redundancy and deep space stealth/mobility means relativistic strikes are irrelevant, and so the usual first-mover expansionary convergent argument applies. So you should get a universe of Dyson spheres feeding out mass-energy to the surrounding cold brains who are constantly colonizing fresh systems for more mass-energy to compute in the voids with. This doesn’t sound remotely like a Fermi paradox resolution.
Computing near the Sun costs more because it’s hotter, sure. Fortunately, I understand that the Sun produces hundreds, even thousands of times more energy than a little fusion reactor does, so some inefficiencies are not a problem.
Every practical computational tech substrate has some error bounded compute/temperature curve, where computational capability quickly falls to zero past some upper bound temperature. Even for our current tech, computational capacity essentially falls off a cliff somewhere well below 1,000K.
My general point is that really advanced computing tech shifts all those curves over, toward lower temperatures. This is a hard limit of physics; it cannot be overcome. So for a really advanced reversible quantum computer that employs superconductivity and long-coherence quantum entanglement, 1K is just as impossible as 1,000K. It's not entirely a matter of efficiency.
Another way of looking at it—advanced tech just requires lower temperatures—as temperature is just a measure of entropy (undesired/unmodeled state transitions). Temperature is literally an inverse measure of computational potential. The ultimate computer necessarily must have a temperature of zero.
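To make the temperature/computation link concrete, here is a minimal sketch of the standard Landauer bound: the minimum energy dissipated per irreversible bit erasure is k_B·T·ln 2, which scales linearly with temperature (the temperatures listed are just illustrative):

```python
# Landauer bound: minimum energy dissipated per *erased* (irreversible) bit.
# Reversible operations can in principle avoid even this floor by never erasing.
from math import log

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_joules_per_bit(temp_kelvin):
    """Minimum dissipation to erase one bit at temperature temp_kelvin."""
    return K_B * temp_kelvin * log(2)

for temp in (1000, 300, 77, 4, 1):  # illustrative temperatures, K
    print(f"T = {temp:>5} K  ->  {landauer_joules_per_bit(temp):.2e} J per erased bit")
```

Dropping from 300K to 3K buys a factor of 100 on that floor, and a fully reversible machine only pays it when it deliberately erases.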
You say that the reversible brains don’t need that much energy.
At the limits they need zero. Approaching anything close to those limits they have no need of stars. Not only that, but they couldn’t survive any energy influx much larger than some limit, and that limit necessarily must go to zero as their computational capacity approaches theoretical limits.
If it’s energy, then they will want to pipe in as much energy as possible from their local star.
No. There is an exact correct amount of energy to pipe in, determined by the viable operating temperature of their current tech. And that amount goes to zero as you advance up the tech ladder.
It may help to consider applying your statement to our current planetary civ. What if we could pipe in 10,000x more energy than we currently receive from the sun? Wouldn't that be great? No. It would cook the earth.
The same principle applies, but as you advance up the ultra-tech ladder, the temp ranges get lower and lower (because remember, temp is literally an inverse measure of maximum computational capability).
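For the 10,000x example, the rough blackbody arithmetic (ignoring albedo and greenhouse details) looks like this: radiative equilibrium temperature scales as the fourth root of the absorbed flux, so 10,000x the flux means roughly 10x the temperature.

```python
# Equilibrium temperature scales as (absorbed flux)^(1/4) for a radiating body.
T_EFF_EARTH = 255.0  # K, Earth's approximate effective blackbody temperature

def equilibrium_temp(flux_multiplier, baseline_temp=T_EFF_EARTH):
    return baseline_temp * flux_multiplier ** 0.25

print(equilibrium_temp(1))       # ~255 K, roughly today
print(equilibrium_temp(10_000))  # ~2550 K, hot enough to melt rock: no biology, no chips
```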
OK, but more computing power is always better, the cold brains want as much as possible, so what limits them?
Given some lump of matter, there is of course a maximum information storage capacity and a maximum compute rate; in a reversible computer the compute rate is bounded by the maximum energy density the system can structurally support, which in turn is bounded by its mass. In terms of ultimate limits, it really depends on whether exotic options like creating new universes are practical or not. If creating new universes is feasible, there probably are no hard limits; all limits become soft.
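As a hedged back-of-envelope for the "bounded by mass" claim (in the style of Lloyd's "ultimate laptop" estimates, not a figure from the argument above): the Margolus-Levitin theorem bounds distinguishable state transitions per second by 2E/(πħ), and E is at most mc² for a given lump of matter.

```python
# Margolus-Levitin style upper bound on operations per second for a lump of matter,
# assuming (optimistically) that all of its mass-energy is available for computation.
from math import pi

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def max_ops_per_second(mass_kg):
    energy = mass_kg * C**2          # total mass-energy
    return 2 * energy / (pi * HBAR)  # Margolus-Levitin bound

print(f"{max_ops_per_second(1.0):.1e} ops/s for 1 kg")  # ~5e50
```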
So you should get a universe of Dyson spheres feeding out mass-energy to the surrounding cold brains who are constantly colonizing fresh systems for more mass-energy to compute in the voids with
Dyson spheres are extremely unlikely to be economically viable/useful, given the low value of energy past a certain tech level (vastly lower energy need per unit mass).
Cold brains need some mass, the question then is how the colonization value of mass varies across space. Mass that is too close to a star would need to be moved away from the star, which is very expensive.
So the most valuable mass, which gets colonized first, would be the rogue planets/nomads, which apparently are more common than planets bound to stars.
If colonization continues long enough, it will spread to lower and lower valued real estate. So eventually smaller rocky bodies in the outer system get stripped away, slowly progressing inward.
The big unknown variable is again what the end of tech in the universe looks like, which gets back to that new-universe-creation question. If that kind of ultimate/magic tech is possible, civs will invest everything into that, and you have less colonization, depending on the difficulty/engineering tradeoffs.
Given some lump of matter, there is of course a maximum information storage capacity and a maximum compute rate; in a reversible computer the compute rate is bounded by the maximum energy density the system can structurally support, which in turn is bounded by its mass. In terms of ultimate limits, it really depends on whether exotic options like creating new universes are practical or not. If creating new universes is feasible, there probably are no hard limits; all limits become soft.
This still doesn’t answer my question. I understand your points about why colder is better, my question is: why don’t they expand constantly with ever more cold brains, which are collectively capable of ever more computation? My smartphone processor is more energy-efficient than my laptop, but that doesn’t mean datacenters don’t exist or are useless or aren’t popping up like mushrooms.
At the limits they need zero.
Correct me if I’m wrong, but zero energy consumption assumes both coldness and slowness, doesn’t it? Slowness is a problem for a superintelligence. What good is super-efficiency if it takes millennia to calculate answers which some more energy would have solved quicker? Time is not free.
It may help to consider applying your statement to our current planetary civ. What if we could pipe in 10,000x more energy than we currently receive from the sun? Wouldn't that be great? No. It would cook the earth.
That would be great. If we had 10,000x more energy (and advanced technology etc), we could disassemble the Earth, move the parts around, and come up with useful structures to compute with that would dissipate that energy productively. Turn it into a Matrioshka brain or something from one of Anders Sandberg's papers on optimal large-scale computing artifacts.
Dyson spheres are extremely unlikely to be economically viable/useful, given the low value of energy past a certain tech level (vastly lower energy need per unit mass). Cold brains need some mass, the question then is how the colonization value of mass varies across space. Mass that is too close to a star would need to be moved away from the star, which is very expensive.
Yes, it is expensive. Good thing we have a star right there to move all that mass with. Maybe its energy could be harnessed with some sort of enclosure....
If colonization continues long enough, it will spread to lower and lower valued real estate. So eventually smaller rocky bodies in the outer system get stripped away, slowly progressing inward.
Which ends in everything being used up, which even if all that planet engineering and moving doesn’t require Dyson spheres, is still inconsistent with our many observations of exoplanets and leaves the Fermi paradox unresolved.
I understand your points about why colder is better, my question is: why don’t they expand constantly with ever more cold brains, which are collectively capable of ever more computation?
At any point in development, investing resources in physical expansion has a payoff/cost/risk profile, as does investing resources in tech advancement. Spatial expansion offers polynomial growth, which is pretty puny compared to the exponential growth from tech advancement. Furthermore, the distances between stars are pretty vast.
If you plot our current trajectory forward, we get to a computational singularity long, long before any serious colonization effort. Space colonization is kind of comical in its economic payoff compared to chasing Moore's Law. So everything depends on what the endpoint of the tech singularity is. Does it actually end with some hard limit to tech? If it does, and slow polynomial growth is the only option after that, then you get galactic colonization as the likely outcome. If the tech singularity leads to stronger outcomes a la new universe manipulations, then you never need to colonize; it's best to just invest everything locally. And of course there is the spectrum in between, where you get some colonization, but the timescale is slowed.
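A toy comparison of the two growth modes (all rates below are illustrative assumptions, not estimates): a colonization front at constant speed only reaches resources cubically in time, while locally compounding tech capability grows exponentially.

```python
# Cubic (volume of an expanding sphere) vs exponential growth, in arbitrary units.
def colonized_volume(t_years, speed_ly_per_year=0.1):    # assumed expansion speed
    return (speed_ly_per_year * t_years) ** 3             # ~volume of systems reached

def tech_capability(t_years, doubling_years=2.0):         # assumed doubling time
    return 2 ** (t_years / doubling_years)

for t in (10, 50, 100, 200):
    print(t, f"{colonized_volume(t):12.3e}", f"{tech_capability(t):12.3e}")
```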
Correct me if I’m wrong, but zero energy consumption assumes both coldness and slowness, doesn’t it?
No, not for reversible computing. The energy required to represent/compute a 1-bit state transition depends on reliability, temperature, and speed, but that energy is not consumed unless there is an erasure (and as energy is always conserved, erasure really just means you lost track of a bit).
In fact the reversible superconducting designs are some of the fastest feasible in the near term.
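One hedged way to see why low dissipation does not force zero speed (the device values below are made-up illustrative numbers, not any specific design): in adiabatic/reversible-style switching, slowly ramping the supply dissipates roughly (RC/t)·CV² per transition instead of the conventional ~½CV², so dissipation trades smoothly against speed rather than hitting a wall.

```python
# Conventional abrupt switching vs slow adiabatic ramp of a capacitive node.
def conventional_switch_energy(c_farads, v_volts):
    return 0.5 * c_farads * v_volts**2

def adiabatic_switch_energy(c_farads, v_volts, r_ohms, ramp_seconds):
    # valid in the slow-ramp regime ramp_seconds >> r_ohms * c_farads
    return (r_ohms * c_farads / ramp_seconds) * c_farads * v_volts**2

C_NODE, V_DD, R_CH = 1e-15, 1.0, 1e4        # illustrative assumptions
print(conventional_switch_energy(C_NODE, V_DD))             # ~5e-16 J
print(adiabatic_switch_energy(C_NODE, V_DD, R_CH, 1e-9))    # ~1e-17 J even at 1 ns ramps
```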
That would be great. If we had 10,000x more energy (and advanced technology etc), we could disassemble the Earth, move the parts around, and come up with useful structures to compute with that would dissipate that energy productively.
Biological computing (cells) doesn’t work at those temperatures, and all the exotic tech far past bio computers requires even lower temperatures. The temperatures implied by 10,000x energy density on earth preclude all life or any interesting computation.
Yes, it is expensive. Good thing we have a star right there to move all that mass with. Maybe its energy could be harnessed with some sort of enclosure....
I'm not all that confident that moving mass out of the system is actually better than just leaving it in place and doing best-effort cooling in situ. The point is that energy is not the constraint for advancing computing tech; it's more mass-limited than anything, or perhaps knowledge is the most important limit. You'd never want to waste all that mass on a Dyson sphere. All of the big designs are dumb: you want it to be as small, compact, and cold as possible. More like a black hole.
Which ends in everything being used up, which even if all that planet engineering and moving doesn’t require Dyson spheres, is still inconsistent with our many observations of exoplanets and
It’s extremely unlikely that all the matter gets used up in any realistic development model, even with colonization. Life did not ‘use up’ more than a tiny fraction of the matter of earth, and so on.
leaves the Fermi paradox unresolved.
From the evidence for mediocrity, the lower KC complexity of mediocrity, and the huge number of planets in the galaxy, I start with a prior strongly favoring a reasonably high number of civs per galaxy, and low odds on us being first.
We have high uncertainty on the end/late outcome of a post-singularity tech civ (or at least I do; I get the impression that people here inexplicably have extremely high confidence in the stellavore expansionist model, perhaps because of a lack of familiarity with the alternatives? Not sure).
If post-singularity tech allows new universe creation and other exotic options, you never have much colonization—at least not in this galaxy, from our perspective. If it does not, and there is an eventual end of tech progression, then colonization is expected.
But as I argued above, even colonization could be hard to detect—as advanced civs will be small/cold/dark.
Transcension is strongly favored a priori for anthropic reasons: transcendent universes create far more observers like us. Then, updating on what we can see of the galaxy, colonization loses steam: our temporal rank is normal, whereas most colonization models predict we should be early.
For transcension, naturally it's hard to predict what that means... but one possibility is a local 'exit', at least from the perspective of outside observers. Creation of lots of new universes, followed by physical civ-death in this universe, but effective immortality in new universes (a la game-theoretic horse trading across the multiverse). New universe creation could also potentially alter physics in ways that permit further tech progression. Either way, all of the mass is locally invested/used up for 'magic' that is incomprehensibly more valuable than colonization.
If you plot our current trajectory forward, we get to a computational singularity long, long before any serious colonization effort. Space colonization is kind of comical in its economic payoff compared to chasing Moore's Law. So everything depends on what the endpoint of the tech singularity is. Does it actually end with some hard limit to tech? If it does, and slow polynomial growth is the only option after that, then you get galactic colonization as the likely outcome.
So your entire argument boils down to another person who thinks transcension is universally convergent and this is the solution to the Fermi paradox? I don’t see what your reversible computing detour adds to the discussion, if you can’t show that making only a few cold brains sans any sort of cosmic engineering is universally convergent.
Biological computing (cells) doesn’t work at those temperatures, and all the exotic tech far past bio computers requires even lower temperatures. The temperatures implied by 10,000x energy density on earth preclude all life or any interesting computation.
I never said anything about using biology or leaving the Earth intact. I said quite the opposite.
It’s extremely unlikely that all the matter gets used up in any realistic development model, even with colonization. Life did not ‘use up’ more than a tiny fraction of the matter of earth, and so on.
You need to show your work here. Why is it unlikely? Why don’t they disassemble solar systems to build ever more cold brains? I keep asking this, and you keep avoiding it. Why is it better to have fewer cold brains rather than more? Why is it better to have less computational power than more? Why do all this intricate engineering for super-efficient reversible computers in the depths of the void, and only make a few and not use up all the local matter? Why are all the answers to these questions so iron-clad and so universally compelling that none of the trillions of civilizations you get from mediocrity will do anything different?
So your entire argument boils down to another person who thinks transcension is universally convergent and this is the solution to the Fermi paradox?
No... As I said above, even if transcension is possible, that doesn't preclude some expansion. You'd only get zero expansion if transcension is really easy/fast. On the convergence issue, we should expect that the main development outcomes are completely convergent. Transcension is instrumentally convergent; it helps any realistic goals.
I don’t see what your reversible computing detour adds to the discussion, if you can’t show that making only a few cold brains sans any sort of cosmic engineering is universally convergent.
The reversible computing stuff is important for modeling the structure of advanced civs. Even in transcension models, you need enormous computation, and everything you could do with new universe creation is entirely compute-limited. Understanding the limits of computing is important for predicting what end-tech computation looks like for both transcend and expand models (for example, if the end-tech optimum were energy-limited, that would predict Dyson spheres to harvest solar energy).
The temperatures implied by 10,000x energy density on earth preclude all life or any interesting computation.
I never said anything about using biology or leaving the Earth intact. I said quite the opposite.
Advanced computation doesn't happen at those temperatures, for the same basic reason that communication doesn't work at extremely low SNR, when noise overwhelms the signal. I was trying to illustrate the connection between energy flow and temperature.
You need to show your work here. Why is it unlikely? Why don’t they disassemble solar systems to build ever more cold brains? I keep asking this, and you keep avoiding it.
First let us consider the optimal compute configuration of a solar system without any large-scale re-positioning, and then we’ll remove that constraint.
For any solid body (planet, moon, asteroid, etc.), there is some optimal compute design given its structural composition, internal temp, and incoming irradiance from the sun. Advanced compute tech doesn't require any significant energy, so being closer to the sun is not an advantage at all. You need to expend more energy on cooling (for example, it takes about 15 kilowatts to cool a single current chip from earth temp to low temps, although there have been some recent breakthroughs in passive metamaterial shielding that could change that picture). So you just use/waste that extra energy cooling the best you can.
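The cooling cost intuition comes straight from the Carnot limit on refrigeration; a rough sketch (the chip heat load and the fraction-of-Carnot efficiency below are assumptions for illustration; only the formula is standard physics):

```python
# Minimum (Carnot-limited) input power to pump a heat load from t_cold up to t_hot.
def min_cooling_power_watts(heat_load_w, t_cold, t_hot):
    return heat_load_w * (t_hot - t_cold) / t_cold

chip_heat_w = 50.0                                    # assumed heat dumped at low temp
ideal_w = min_cooling_power_watts(chip_heat_w, t_cold=4.0, t_hot=300.0)
print(f"Carnot-ideal:     {ideal_w / 1e3:.1f} kW")          # ~3.7 kW
print(f"At 25% of Carnot: {ideal_w / 0.25 / 1e3:.1f} kW")   # ~15 kW, the ballpark figure above
```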
So, now consider moving the matter around. What would be the point of building a dyson sphere? You don’t need more energy. You need more metal mass, lower temperatures and smaller size. A dyson sphere doesn’t help with any of that.
Basically we can rule out config changes for the metal/rocky mass (useful for compute) that:
1.) increase temperature
2.) increase size
The gradient of improvement is all in the opposite direction: decreasing temperature and size (with tradeoffs of course).
So it may be worthwhile investing some energy in collecting small useful stuff (asteroids) into larger, denser computational bodies. It may even be worthwhile moving stuff farther from the star, but the specifics really depend on a complex set of unknowns.
One of the big unknowns of course being the timescale, which depends on the transcend issue.
Now for the star itself: it has most of the mass, but that mass is not really accessible, and most of it is in low-value elements; we want more metals. It could be that the best use of that matter is to simply continue cooking it in the stellar furnace to produce more metals, as there is no other way, as far as I know.
But doing anything with the star would probably take a very long amount of time, so it’s only relevant in non-transcendent models.
In terms of predicted observations, in most of these models there are few if any large structures, but individual planetary bodies will probably be altered from their natural distributions. Some possible observables: lower than expected temperatures, unusual chemical distributions, and possibly higher than expected quantities/volumes of ejected bodies.
Some caveats: I don’t really have much of an idea of the energy costs of new universe creation, which is important for the transcend case. That probably is not a reversible op, and so it may be a motivation for harvesting solar energy.
There's also KIC 8462852 of course. If we assume that it is a dyson swarm like object, we can estimate a rough model for civs in the galaxy. KIC 8462852 has been dimming for at least a century. It could represent the endphase of a tech civ, approaching its final transcend state. Say that takes around 1,000 years (vaguely estimating from the 100 years of data we have).
This dimming star is one out of perhaps 10 million nearby stars we have observed in this way. Say 1 in 10 systems will ever develop life, and the spread in when civilizations arise is about a billion years. Then, since the endphase lasts only about 1,000 years, we should expect roughly 1 in 10 million observed stars to be in that dimming endphase right now. This would of course predict a large number of stars that have already reached their endstate, but given that we just barely detected KIC 8462852 because it was dimming, we probably can't yet detect stars that already dimmed and then stabilized long ago.
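Spelling that estimate out as arithmetic (the inputs are the rough guesses above, not measurements):

```python
stars_surveyed       = 10_000_000   # stars monitored well enough to catch such dimming
frac_systems_w_life  = 1 / 10       # systems that ever develop life (guess)
civ_age_spread_yr    = 1e9          # spread in when civilizations arise (guess)
endphase_yr          = 1_000        # assumed duration of the dimming endphase

frac_in_endphase_now = frac_systems_w_life * (endphase_yr / civ_age_spread_yr)
print(frac_in_endphase_now)                      # 1e-7, i.e. about 1 in 10 million
print(stars_surveyed * frac_in_endphase_now)     # ~1 such star expected in the sample
```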
Advanced computation doesn’t happen at those temperatures
Could it make sense to use an enormous amount of energy to achieve an enormous amount of cooling? Possibly using laser cooling or some similar technique?
Advanced computation doesn't happen at those temperatures, for the same basic reason that communication doesn't work at extremely low SNR, when noise overwhelms the signal. I was trying to illustrate the connection between energy flow and temperature.
And I was trying to illustrate that there’s more to life than considering one cold brain in isolation in the void without asking any questions about what else all that free energy could be used for.
So, now consider moving the matter around. What would be the point of building a dyson sphere? You don’t need more energy. You need more metal mass, lower temperatures and smaller size. A dyson sphere doesn’t help with any of that.
A Dyson sphere helps with moving matter around, potentially with elemental conversion, and with cooling. If nothing else, if the ambient energy of the star is a big problem, you can use it to redirect the energy elsewhere away from your cold brains.
But doing anything with the star would probably take a very long amount of time, so it’s only relevant in non-transcendent models.
Exponential growth. I think Sandberg's calculated you can build a Dyson sphere in a century, apropos of KIC 8462852's oddly gradual dimming. And you hardly need to finish it before you get any benefits.
So it may be worthwhile investing some energy in collecting small useful stuff (asteroids) into larger, denser computational bodies. It may even be worthwhile moving stuff farther from the star, but the specifics really depend on a complex set of unknowns.
You say ‘may’, but that seems really likely. After all, what ‘complex set of unknowns’ will be so fine-tuned that the answer will, for all civilizations, be 0 rather than some astronomically large number? This is the heart of your argument! You need to show this, not handwave it! You cannot show that this resolves the Fermi paradox unless you make a solid case that cold brains will find harnessing solar systems’ energy and matter totally useless! As it stands, this article reads like ‘1. reversible computing is awesome 2. ??? 3. no expansion, hence, transcension 4. Fermi paradox solved!’ No, it’s not. Stop handwaving and show that more cold brains are not better, that there are zero uses for all the stellar energy and mass, and there won’t be any meaningful colonization or stellar engineering.
There's also KIC 8462852 of course. If we assume that it is a dyson swarm like object, we can estimate a rough model for civs in the galaxy. KIC 8462852 has been dimming for at least a century. It could represent the endphase of a tech civ, approaching its final transcend state. Say that takes around 1,000 years (vaguely estimating from the 100 years of data we have).
Which is a highly dubious case, of course.
we probably can’t yet detect stars that already dimmed and then stabilized long ago.
I don’t see why the usual infrared argument doesn’t apply to them or KIC 8462852.
I don’t see why the usual infrared argument doesn’t apply to them or KIC 8462852.
If by infrared argument, you refer to the idea that a dyson swarm should radiate in the infrared, this is probably wrong. This relies on the assumption that the alien civ operates at earth temp of 300K or so. As you reduce that temp down to 3K, the excess radiation diminishes to something indistinguishable from the CMB, so we can't detect large cold structures that way. For the reasons discussed earlier, non-zero operating temp would only be useful during initial construction phases, whereas near-zero temp is preferred in the long term. The fact that KIC 8462852 has no infrared excess makes it more interesting, not less.
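For concreteness, the standard blackbody numbers behind that claim (Stefan-Boltzmann power and Wien peak wavelength; the 300K/3K comparison is just illustrative):

```python
# Radiated power per unit area and peak emission wavelength for a blackbody.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W / m^2 / K^4
WIEN_B = 2.897771955e-3  # Wien displacement constant, m*K

def radiated_w_per_m2(temp_k):
    return SIGMA * temp_k ** 4

def peak_wavelength_um(temp_k):
    return WIEN_B / temp_k * 1e6

for temp in (300.0, 3.0, 2.725):   # warm swarm, cold swarm, CMB
    print(f"T = {temp:7.3f} K   {radiated_w_per_m2(temp):.2e} W/m^2   peak ~{peak_wavelength_um(temp):7.1f} um")
```

A ~300K swarm glows about a hundred million times brighter per unit area than a ~3K one, and the cold swarm's emission peaks right where the CMB does.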
A Dyson sphere helps with moving matter around, potentially with elemental conversion, and with cooling.
Moving matter—sure. But that would be a temporary use case, after which you’d no longer need that config, and you’d want to rearrange it back into a bunch of spherical dense computing planetoids.
potentially with elemental conversion
This is dubious. I mean, in theory you could reflect/recapture star energy to raise the star's temperature and potentially generate metals faster, but it seems to be a huge waste of mass for a small increase in cooking rate. You'd be giving up all of your higher intelligence by not using that mass for small, compact, cold compute centers.
If nothing else, if the ambient energy of the star is a big problem, you can use it to redirect the energy elsewhere away from your cold brains.
Yes, but that's just equivalent to shielding. That only requires redirecting the tiny fraction of the star's output that actually hits the planetary surfaces. It doesn't require any large structures.
Exponential growth.
Exponential growth = transcend. Exponential growth will end unless you can overcome the speed of light, which requires exotic options like new universe creation or altering physics.
I think Sandberg's calculated you can build a Dyson sphere in a century, apropos of KIC 8462852's oddly gradual dimming. And you hardly need to finish it before you get any benefits.
Got a link? I found this FAQ, where he says:
Using self-replicating machinery the asteroid belt and minor moons could be converted into habitats in a few years, while disassembly of larger planets would take 10-1000 times longer (depending on how much energy and violence was used).
That's a lognormal dist over several decades to several millennia. A dimming time for KIC 8462852 in the range of centuries to a millennium is a near perfect (lognormal) dist overlap.
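As a sketch only (the parameters are one arbitrary way of reading "several decades to several millennia" as a 2-sigma range, not anything from Sandberg):

```python
# How much probability mass a 'decades to millennia' lognormal puts on 100-1000 years.
from math import log, sqrt
from scipy.stats import lognorm

lo_yr, hi_yr = 30.0, 3000.0            # assumed 2-sigma range for construction time
median = sqrt(lo_yr * hi_yr)           # ~300 years
sigma = log(hi_yr / median) / 2        # log-space standard deviation

dist = lognorm(s=sigma, scale=median)
print(f"P(100-1000 yr) ~ {dist.cdf(1000) - dist.cdf(100):.2f}")   # a large chunk of the mass
```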
So it may be worthwhile investing some energy in collecting small useful stuff (asteroids) into larger, denser computational bodies. It may even be worthwhile moving stuff farther from the star, but the specifics really depend on a complex set of unknowns.
You say ‘may’, but that seems really likely.
The recent advances in metamaterial shielding stuff suggest that low temps could be reached even on earth without expensive cooling, so the case I made for moving stuff away from the star for cooling is diminished.
Collecting/rearranging asteroids, and rearranging rare elements of course still remain as viable use cases, but they do not require as much energy, and those energy demands are transient.
After all, what ‘complex set of unknowns’ will be so fine-tuned that the answer will, for all civilizations, be 0 rather than some astronomically large number?
Physics. It’s the same for all civilizations, and their tech paths are all the same. Our uncertainty over those tech paths does not translate into a diversity in actual tech paths.
You cannot show that this resolves the Fermi paradox unless you make a solid case that cold brains will find harnessing solar systems’ energy and matter totally useless!
There is no ‘paradox’. Just a large high-D space of possibilities, and observation updates that constrain that space.
I never ever claimed that cold brains will “find harnessing solar systems’ energy and matter totally useless”, but I think you know that. The key question is what are their best uses for the energy/mass of a system, and what configs maximize those use cases.
I showed that reversible computing implies extremely low energy/mass ratios for optimal compute configs. This suggests that advanced civs in the timeframe 100 to 1000 years ahead of us will be mass-limited (specifically rare metal element limited) rather than energy-limited, and would prefer to convert excess energy into mass rather than the converse.
Which gets me back to a major point: endgames. For reasons I outlined earlier, I think the transcend scenarios are more likely. They have a higher initial prior, and are far more compatible with our current observations.
In the transcend scenarios, exponential growth just continues up until some point in the near future where exotic space-time manipulations—creating new universes or whatever—are the only remaining options for continued exponential growth. This leads to an exit for the civ, where from the outside perspective it either physically dies, disappears, or transitions to some final inert config. Some of those outcomes would be observable, some not. Mapping out all of those outcomes in detail and updating on our observations would be exhausting—a fun exercise for another day.
The key variable here is the timeframe from our level to the final end-state. That timeframe determines the entire utility/futility tradeoff for exploitation of matter in the system, based on ROI curves.
For example, why didn't we start converting all of the useful matter of earth into Babbage-style mechanical computers in the 19th century? Why didn't we start converting all of the matter into vacuum tube computers in the '50s? And so on....
In an exponentially growing civ like ours, you always have limited resources, and investing those resources in replicating your current designs (building more citizens/compute/machines whatever) always has complex opportunity cost tradeoffs. You also are expending resources advancing your tech—the designs themselves—and as such you never expend all of your resources on replicating current designs, partly because they are constantly being replaced, and partly because of the opportunity costs between advancing tech/knowledge vs expanding physical infrastructure.
So civs tend to expand physically at some rate over time. The key question is how long? If transcension typically follows 1,000 years after our current tech level, then you don’t get much interstellar colonization bar a few probes, but you possibly get temporary dyson swarms. If it only takes 100 years, then civs are unlikely to even leave their home planet.
You only get colonization outcomes if transcension takes long enough, leading to colonization of nearby matter, which all then transcend roughly within the timeframe of their distance from the origin. Most of the nearby useful matter appears to be rogue planets, so colonization of stellar systems would take even longer, depending on how far down it is in the value chain.
And even in the non-transcend models (say the time to transcend is greater than millions of years), you can still get scenarios where the visible stars are not colonized much: if their value is really low compared to abundant, higher-value cold, dark matter (rogue planets, etc.), colonization is slow/expensive, and the timescale spread over civ ages is low.