Taking the Bayesian view further, our posterior is proportional to the prior times the likelihood of our observations. You're right that the prior must encode a very strong belief in the existence of aliens. However, an expanding alien civilization would be a very large, obvious, and distinctive spectacle, and we have seen no evidence of that so far. Thus it is not clear what our posterior belief must be.
An expanding stellavore civ would be very obvious, and the posterior for that possibility is thus diminished.
However there are many other possibilities. An expanding cold dark civ would be less obvious, and in fact we could already be looking at it.
There are also the transcendent models, where all expansion is inward and post-singularity civs rather quickly exit the galaxy in some manner—perhaps through new universe creation. That appears to be possible as far as physics is concerned, and it allows for continued exponential growth rather than the unappealing cubic growth you get from physical expansion. Physical expansion would be enormous stagnation from our current growth perspective.
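To make that growth-rate contrast concrete, here is a toy comparison (my illustration, with assumed numbers: reachable resources from light-speed expansion scale roughly with the enclosed volume, ~t^3, while inward growth is modeled as Moore's-law-style doubling every 2 years):

```python
# Toy model (illustrative assumptions): outward expansion yields resources ~ t^3,
# inward "transcendent" growth doubles every 2 years. Both normalized to 1 at t=1.
for years in (10, 100, 1000):
    cubic = years ** 3
    exponential = 2 ** (years / 2)
    print(f"{years:>4} yr: cubic ~ {cubic:.1e}x, exponential ~ {exponential:.1e}x")
# The exponential overtakes around year 30 and dwarfs the cubic thereafter.
```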
After updating on our observations the standard stellavore model becomes low probability relative to other future civ models.
Why couldn’t a civilization lead to both expanding and universe-exiting threads of evolution? Taking life on Earth as an analogy, it’s clear that life expands to fill all niches it can. A particular thread of evolution won’t stop occurring just because another thread has found a more optimal solution. In other words, it’s not a depth-first search, it’s a breadth-first search. Unless there’s a good reason for a civilization to not expand into space, it will probably expand into space.
It would seem very strange, then, that no expanding interstellar civilization has occurred.
Why couldn’t a civilization lead to both expanding and universe-exiting threads of evolution? Taking life on Earth as an analogy, it’s clear that life expands to fill all niches it can.
Sure, but we are uncertain about everything, including what the niches for postbiological civs are. Physics suggests that computation is ultimately primarily entropy/temperature limited (rather than energy limited), and thus the niches for advanced civs could be in the cold dark interstellar material (which we know is more plentiful than the hot bright stuff). We don't see stellavores for the same reasons that humanity isn't interested in colonizing deep sea thermal vents (or underwater habitats in general).
So the stars could be the past—the ancient history of life, not its far future.
Unless there’s a good reason for a civilization to not expand into space, it will probably expand into space.
In the cold dark models, the galaxy is already colonized, and the evidence is perhaps already in front of us...
In this model the physical form of alien civs is likely to be in compact cold objects that are beyond current tech to image directly. The most likely chance to see them is during construction, which would be more energetically expensive and thus could take place near a star—perhaps the WTF star is a civ in transition to elder status.
The WOW signal could have been an alien radar ping, similar to what aliens would see from the radar pings that we use for planetary radar imaging with Arecibo.
Aliens most likely have already visited Sol at various points, but for them it is something like the ocean floor is to us—something of minor interest for scientific study.
On that note, it's starting to look like the EmDrive and kin are real. If that is true, it is additional evidence for aliens. Why? Because the earliest and most credible modern UFO reports—such as the Kenneth Arnold sighting—are most consistent with craft that are vaguely aerodynamic but do not rely on aerodynamic principles for thrust. The Arnold report contains rather specific details of the craft's speed and acceleration, lack of contrail, etc. As we learn more about future engineering capabilities for atmospheric craft, that report could become rather strong evidence indeed (or not).
Unless there’s a good reason for a civilization to not expand into space, it will probably expand into space.
In the transcendent models, civs use all available resources to expand inward, because that allows for continued exponential growth. Transcendent civs don't expand outward because it is always an exceptionally poor use of resources. Notice that this is true today—we could launch an interstellar colony ship for some X trillions, but spending those resources on Moore's Law is vastly preferred. In the transcendent model, this just continues to be true indefinitely—likely ending in hard singularities, strange machines that create new universes, etc.
Finally, the distributions over various alien civs are not really statistically independent, even if they developed independently. Our uncertainty is at the model level, in terms of how physics and future engineering work. The particular instance variables of each civ don't matter so much. So if the cold dark model is correct, all civs look like that; if the transcendent model is correct, all civs look like that; etc.
In the cold dark models, the galaxy is already colonized, and the evidence is perhaps already in front of us...
The hypothesis that dark matter could be comprised of cold clumps of matter has been considered (these objects are called MACHOs) and as far as I know this hypothesis has been largely ruled out as they have properties that aren’t consistent with how dark matter actually behaves.
I also think you’re making an unfounded assumption here—that advanced civilizations could be stealthy. But what we know suggests that there ain’t no stealth in space. There are a number of difficulties in keeping large energy-consuming objects cold, and even if you succeeded in keeping the brains themselves cold, the associated support equipment and fusion reactors that you mention would be pretty hot. And the process of constructing the brains would be very hot.
The hypothesis that dark matter could be comprised of cold clumps of matter has been considered (these objects are called MACHOs) and as far as I know this hypothesis has been largely ruled out as they have properties that aren’t consistent with how dark matter actually behaves.
Unrelated. There is baryonic and non-baryonic dark matter. Most of the total dark matter is currently believed to be non-baryonic, but even leaving that aside, the amount of baryonic dark matter is still significant—perhaps on par with or greater than the baryonic visible matter. Most important of all is the light/dark ratio of heavier-element baryonic matter and smaller planets/planetoids. There are some interesting new results suggesting most planets/planetoids are free floating rather than bound to stars (see links in my earlier article—"nomads of the galaxy" etc).
There is a limit to how big a giant computing device can get before gravitational heating makes the core unusable—the ideal archailect civ may be small, too small to detect directly. But perhaps they hitch rides orbiting larger objects.
Also, we don’t know enough about non-baryonic dark matter/energy to rule it out as having uses or a relation to elder civs (although it seems unlikely, but still—there are a number of oddities concerning the whole dark energy inflation model).
I also think you're making an unfounded assumption here—that advanced civilizations could be stealthy... There are a number of difficulties in keeping large energy-consuming objects cold,
Well, we are talking about hypothetical post-singularity civs...
There doesn't appear to be any intrinsic limit to computational energy efficiency with reversible computing, and the practicality of advanced quantum computing appears to scale with how close one can get to absolute zero and how long coherence can be maintained there.
So at the limits, computational civs approach CMB temperature and use negligible energy for computation. At some point it becomes worthwhile to spend some energy to move away from stars.
Any model makes some assumptions based on what aspects of engineering/physics we believe will still hold into the future. The article you linked makes rather huge assumptions—alien civs need to travel around in ships, ships can only move by producing thrust, etc. Even then, from what I understand, detecting thrust is only possible at in-system distances, not light-year distances.
The cold dark alien model I favor simply assumes advanced civs will approach physical limits.
The CMB temperature (2.7 K) is still very warm in relative terms, and it's hard to see how effective large-scale quantum computing could be done at that temperature (current crude quantum computers operate at millikelvin temperatures and still have only very minuscule levels of coherence). The only way to get around this is to either use refrigeration to cool down the system (leading to a very hot fusion reactor and refrigeration equipment) or make do with 2.7 K, which would probably lead to a lot of heat dissipation.
You would absorb a large amount of entropy from the CMB at this temperature (about 1000 terabytes per second per square meter); you’d need to compensate for this entropy to keep your reversible computer working.
The CMB is just microwave radiation right? So reflective shielding can block most of that. What are the late engineering limits for microwave reflective coatings? With superconducting surfaces, metamaterials, etc?
Some current telescopes cool down subcomponents to very low temperatures without requiring large fusion reactors.
If the physical limits of passive shielding are non-generous, this just changes the ideal designs to use more active cooling than they otherwise would and limit the ratio of quantum computing stuff to other stuff—presumably there is always some need for active cooling and that is part of the energy budget, but that budget can still be very small and the final device temperature could even be less than CMB.
The CMB is just microwave radiation right? So reflective shielding can block most of that.
I’m afraid it can’t. The ‘shielding’ itself would soon reach equilibrium with the CMB and begin emitting at 2.7 K. It makes no difference what it’s made of. You can’t keep an object cooler than the background temperature indefinitely without expending energy. If you could, you would violate conservation of energy.
And, again, the process of generating that energy would produce a lot of heat and preclude stealth.
Some current telescopes cool down subcomponents to very low temperatures without requiring large fusion reactors.
But the bulk of the telescope is never at a temperature lower than (or even equal to) the background temperature. JWST, for instance, is designed for a 50 K operating temperature (which emits radiation at about 100,000 times the background level according to the Stefan-Boltzmann law).
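That factor checks out under the T^4 scaling (a one-line sanity check, not a full radiometric model):

```python
# Stefan-Boltzmann: radiated power per unit area ~ T^4, so a 50 K surface
# versus the 2.7 K background radiates brighter by roughly:
print((50 / 2.7) ** 4)  # ~1.2e5, i.e. about 100,000 times
```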
If the physical limits of passive shielding are non-generous, this just changes the ideal designs to use more active cooling than they otherwise would and limit the ratio of quantum computing stuff to other stuff
Again, this would just make the problem worse, as a decrease in entropy in one part of the system must be balanced by a larger increase in entropy elsewhere. I’m talking about the possibility of stealth here (while maintaining large-scale computation).
but that budget can still be very small and the final device temperature could even be less than CMB.
This is a non-obvious statement to me. It seems that a computation on the level you’re describing (much larger in scale than the combined brainpower of current human civilization by orders of magnitude) would require a large amount of mass and/or energy and would thus create a very visible heat signature. It would be great if you could offer some calculations to back up your claim.
Years ago I had the idea that advanced civilizations can radiate waste heat into black holes instead of interstellar space, which would efficiently achieve much lower temperatures and also avoid creating detectable radiation signatures. See http://www.weidai.com/black-holes.txt and my related LW post.
The recent news about KIC 8462852 immediately reminded me of your old txt file article. I’m really curious what you think about the recent information given how much you seem to have thought about advanced civs.
I’m afraid it can’t. The ‘shielding’ itself would soon reach equilibrium with the CMB and begin emitting at 2.7 K.
EDIT: After updating through this long thread, I am now reasonably confident that the above statement is incorrect. Passive shielding in the form of ice can cool the earth against the sun's irradiance to a temp lower than the black body temp, and there is nothing special about the CMB irradiance. See the math here at the end of the thread.
Sure—if it wasn’t actively cooled, but of course we are assuming active cooling. The less incoming radiation the system absorbs, the less excess heat it has to deal with.
It makes no difference what it’s made of. You can’t keep an object cooler than the background temperature indefinitely without expending energy. If you could, you would violate conservation of energy.
Sure you need to expend energy, but obviously the albedo/reflectivity matters a great deal. Do you know what the physical limits for reflectivity are? For example—if the object’s surface can reflect all but 10^-10 of the incoming radiation, then the active cooling demands are reduced in proportion, correct?
I’m talking about the possibility of stealth here (while maintaining large-scale computation).
I’m thinking just in terms of optimal computers, which seems to lead to systems that are decoupled from the external environment (except perhaps gravitationally), and thus become dark matter.
would require a large amount of mass and/or energy and would thus create a very visible heat signature.
The limits of reversible computing have been discussed in the lit—I don't have time to review it here—but physics doesn't appear to impose any hard limit on reversible efficiency. Information requires mass to represent it and energy to manipulate it, but that energy doesn't necessarily need to be dissipated into heat. Only erasure requires dissipation. Erasure can be algorithmically avoided by recycling erased bits as noise fed into RNGs for sampling algorithms. The bitrate of incoming sensor observations must be matched by an outgoing dump, but that can be proportionally very small.
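For scale, a minimal sketch of the erasure cost being discussed: the Landauer bound of k_B*T*ln2 per erased bit (standard physics; the temperatures are just illustrative):

```python
import math

K_B = 1.381e-23  # Boltzmann constant, J/K

def landauer_bound(T):
    """Minimum heat dissipated when erasing one bit at temperature T (joules)."""
    return K_B * T * math.log(2)

for T in (300, 77, 2.7):
    print(f"T = {T:>5} K: {landauer_bound(T):.2e} J per erased bit")
# ~2.9e-21 J at 300 K down to ~2.6e-23 J at 2.7 K; reversible operations that
# avoid erasure are not subject to this floor.
```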
I think you’re still not ‘getting it’, so to speak. You’ve acknowledged that active cooling is required to keep your computronium brain working. This is another way of saying you expend energy to remove entropy from some part of the system (at the expense of a very large increase in entropy in another part of the system). Which is what I said in my previous reply. However you still seem to think that, given this consideration, stealth is possible.
By the way, the detection ranges given in that article are for current technology! Future technology will probably be much, much better. It's physically possible, for instance, to build a radio telescope consisting of a flat square panel array of antennas one hundred thousand kilometers on a side. Such a telescope could detect things we can't even imagine with current technology. It could resolve an ant crawling on the surface of Pluto or provide very detailed surface maps of exoplanets. Unlike stealth, there is no physical limit that I can think of to how large you can build a telescope.
but physics doesn’t appear to impose any hard limit on reversible efficiency
Not theoretically, no. However, at any temperature higher than 0 K, purely reversible computing is impossible. Unfortunately there is nowhere in the universe that is that cold, and again, maintaining this cold temperature requires a constant feed of energy. These considerations impose hard, nonzero limits on power consumption. Performing meaningful computations with arbitrarily small power consumption is impossible in our universe.
You’re repeatedly getting very basic facts about physics and computation wrong. I love talking about physics but I don’t have the time or energy to keep debating these very basic concepts, so this will probably be my last reply.
No—because you didn’t actually answer my question, and you are conflating the reversible computing issue with the stealth issue.
I asked:
Do you know what the physical limits for reflectivity are? For example—if the object’s surface can reflect all but 10^-10 of the incoming radiation, then the active cooling demands are reduced in proportion, correct?
The energy expended and entropy produced for cooling is proportional to the incoming radiation absorbed, correct? And this can be lowered arbitrarily with reflective shielding—or is that incorrect? Nothing whatsoever to do with stealth, the context of this discussion concerns only optimal computers.
Not theoretically, no. However, at any temperature higher than 0 K, purely reversible computing is impossible.
Don't understand this—the theory on reversible computing says that energy expenditure is proportional to bit erasure, plus whatever implementation overhead. The bit erasure cost varies with temperature, sure, but you could still theoretically have a reversible computer working at 100 K.
You seem to be thinking that approaching zero energy production requires zero temperature—no. Low temperature reduces the cost of bit erasure, but bit erasure itself can also be reduced to arbitrarily low levels with algorithmic level recycling.
These considerations impose hard, nonzero limits on power consumption.
Which are?
You’re repeatedly getting very basic facts about physics and computation wrong.
Such as? Am I incorrect in the assumption that the cost of active cooling is proportional to the heat/entropy to be removed, and thus to the incoming radiation absorbed—and thus can be reduced arbitrarily with shielding?
Thermal power absorbed by system: P = σAT^4 (J/s)
Entropy absorbed by system: X = P / (T k_B log(2)) (bits/s)
Minimal amount of energy required to overcome this entropy: k_B T X * log(2) -- this happens to be equal to P.
Limit: External surface area of computer times σT^4.
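For concreteness, here is a quick numeric evaluation of those three formulas (a sketch; T = 2.7 K and A = 1 m^2 are just illustrative inputs):

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
K_B = 1.381e-23    # Boltzmann constant, J/K

T, A = 2.7, 1.0    # CMB temperature (K) and surface area (m^2)

P = SIGMA * A * T**4              # thermal power absorbed, J/s
X = P / (T * K_B * math.log(2))   # entropy absorbed, bits/s
E = K_B * T * X * math.log(2)     # minimum energy to shed it, J/s (equals P)

print(f"P = {P:.2e} W, X = {X:.2e} bits/s, E = {E:.2e} W")
# -> P ~ 3.0e-06 W and X ~ 1.2e17 bits/s per square meter of exposed surface.
```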
As for active cooling, I think the burden of proof here is up to you to present a viable system and the associated calculations. How much energy does it take to keep a e.g. sphere of certain radius cold?
The thermal power you quoted is the perfect black body approximation. For a grey body, the thermal power is:
P = eoAT^4
where e is the material-specific emissivity coefficient, and the same rule holds for absorption.
You seem to be implying that for any materials, there is a fundamental physical law which requires that absorption and emission efficiency is the same—so that a reflector which absorbs only e% of the incoming radiation is also only e% efficient at cooling itself through thermal emission.
Fine—even assuming that is the case, there doesn’t seem to be any hard limit to reflective efficiency. A hypothetical perfect whitebody which reflects all radiation perfectly would have no need of cooling by thermal emission—you construct the object (somewhere in deep space away from stars) and cool it to epsilon above absolute zero, and then it will remain that cold for the duration of the universe.
There is also current ongoing research into zero-index materials that may exhibit 'super-reflection'.
If we can build superconductors, then super-reflectors should be possible for advanced civs—a superconductor achieves a state of perfect thermal decoupling for electron interactions, suggesting that exotic material states could achieve perfect thermal decoupling for photon interactions.
So the true physical limit is for a perfect white body with reflectivity 1. The thermal power and entropy absorbed is zero, no active cooling required.
Furthermore, it is not clear at all that reflection efficiency must always equal emission efficiency.
Wavelength- and subwavelength-scale particles,[1] metamaterials,[2] and other nanostructures are not subject to ray-optical limits and may be designed to exceed the Stefan–Boltzmann law.
What do you make of that?
Also—I can think of a large number of apparent counter-examples to the rule that reflection and emission efficiency must be tied.
How do we explain greenhouse warming of the earth, snowball earth, etc.? The temperature of the earth appears to mainly depend on its albedo, and the fraction of incoming light reflected doesn't appear to be intrinsically related to the fraction of outgoing light, with separate mechanisms affecting each.
Or just consider a one-way mirror: it reflects light in one direction but is transparent in the other. If you surround an object with a one-way mirror (at CMB infrared/microwave wavelengths), wouldn't it stay very cold, as it can emit infrared but is protected from absorbing it? Or is this destined to fail for some reason?
I find nothing in the physics you have brought up to rule out devices with long term temperatures much lower than 2.7K—even without active cooling. Systems can be out of equilibrium for extremely long periods of time.
Again, you’re getting the fundamental and basic physics wrong. You’ve also evaded my question.
There is no such thing as a perfect whitebody. It is impossible. All those examples you mention are for narrow-band applications. Thermal radiation is wideband and occurs over the entire electromagnetic spectrum.
The piece in the Wikipedia article links to papers such as http://arxiv.org/pdf/1109.5444.pdf in which thermal radiation (and absorption) are increased, not decreased!
Greenhouse warming of the Earth is an entirely different issue and I don’t see how it’s related. The Earth’s surface is fairly cold in comparison to the Sun’s.
I find nothing in the physics you have brought up to rule out devices with long term temperatures much lower than 2.7K—even without active cooling.
Well, firstly, you have to cool it down to below 2.7 K in the first place. That most certainly requires 'active cooling'. Then you can either let it slowly equilibrate or keep it actively cold. But then you have to consider the Carnot efficiency of the cooling system (which dictates that energy consumption goes up as e/T_c, where T_c is the temperature of the computer and e is the energy dissipated by the computer). So you have to consider precisely how much energy the computer is going to use at a certain temperature and how much energy it will take to maintain it at that temperature.
EDIT: You’ve also mentioned in that thread you linked that “Assuming large scale quantum computing is possible, then the ultimate computer is thus a reversible massively entangled quantum device operating at absolute zero.” Well, such a computer would not only be fragile, as you said, but it would also be impossible in the strong sense. It is impossible to reach absolute zero because doing so would require an infinite amount of energy: http://io9.com/5889074/why-cant-we-get-down-to-absolute-zero . For the exact same reason, it is impossible to construct a computer with full control over all the atoms. Every computer is going to have some level of noise and eventual decay.
Again, you’re getting the fundamental and basic physics wrong. You’ve also evaded my question.
Show instead of tell. I didn't yet answer your question about the initial energy cost of cooling the sphere because it's part of the initial construction cost, and you haven't yet answered my questions about reflectivity vs emission and how it relates to temperature.
There is no such thing as a perfect whitebody. It is impossible.
Says what law—and more importantly—what is the exact limit then? Perfect super-conductivity may be impossible but there doesn’t appear to be an intrinsic limit to how close one can get, and the same appears to apply for super-reflection. This whole discussion revolves around modeling technologies approaching said limits.
All those examples you mention are for narrow-band applications. Thermal radiation is wideband and occurs over the entire electromagnetic spectrum.
This helps my case—the incoming radiation is narrow-band microwave from the CMB. The outgoing radiation can be across the spectrum.
The piece in the Wikipedia article links to papers such as http://arxiv.org/pdf/1109.5444.pdf in which thermal radiation (and absorption) are increased, not decreased!
If the 'law' can be broken by materials which emit more than the law allows, this also suggests the 'law' can be broken in other ways, as in super-reflectors.
One-way mirrors do not exist.
Ok.
Greenhouse warming of the Earth is an entirely different issue and I don’t see how it’s related. The Earth’s surface is fairly cold in comparison to the Sun’s.
If the earth’s equilibrium temperature varies based on the surface albedo, this shows that reflectivity does matter and suggests a hypothetical super-reflector shielding for the CMB microwave could lead to lower than CMB temperatures. (because snow covering of the earth leads to lower equilibrium temperatures than a black-body at the same distance from the sun.)
Well, firstly, you have to cool it down to below 2.7 K in the first place.
Do you? I'm not clear on that—you haven't answered the earth counter example, which seems to show that even without active cooling, all it takes is albedo/reflectivity for an object's equilibrium temperature to be lower than that of a black body in the same radiation environment. Is there something special about low temps like 2.7 K?
That most certainly requires 'active cooling'. Then you can either let it slowly equilibrate or keep it actively cold. But then you have to consider the Carnot efficiency of the cooling system (which dictates energy consumption goes up as e/T_c, where T_c is the temperature of the computer and e is the energy dissipated by the computer).
Apparently coherence in current quantum computers requires millikelvin temperatures, which is why I'm focusing on the limits approaching 0 K. And from what I understand this is fundamental—the limits of computing involve very large, long-lived coherent states that are only possible at temperatures approaching 0.
If we weren't considering quantum computing, then sure, I don't see any point to active cooling below 2.7 K. The energy cost of bit erasures is ~C*T_c for some constant C, but the cooling cost goes as e/T_c. So this effectively cancels out—you don't get any net energy efficiency gain for cooling below the background temperature. (Of course, access to black holes much colder than the CMB changes that.)
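To spell out that cancellation (my arithmetic, using the same Carnot relation): erasing a bit at T_c dissipates Q = k_B*T_c*ln2 of heat at T_c, and the Carnot work to pump that heat up to the background at T_bg is W = Q*(T_bg - T_c)/T_c = k_B*ln2*(T_bg - T_c). The total energy per erased bit is Q + W = k_B*T_bg*ln2, which is independent of T_c. Hence there is no net efficiency gain from cooling below the background temperature.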
Well, such a computer would not only be fragile, as you said, but it would also be impossible in the strong sense. It is impossible...
Yes—but again we are discussing limits analysis where said quantities approach zero, or infinity or whatever.
You can trivially prove this for yourself. High-energy gamma rays cannot be completely reflected by matter. All thermal radiation contains some high-energy gamma rays. Thus no material can perfectly reflect thermal radiation. QED.
This helps my case—the incoming radiation is narrow-band microwave from the CMB
No it’s not. CMB radiation spans the entire EM spectrum. Thermal radiation is almost the exact opposite of narrow-band radiation.
If the ‘law’ can be broken by materials which emit more than the law allows
It’s not really broken though. It’s just that radiation in these materials happens through mechanisms beyond conventional blackbody radiation. A common LED emits radiation far in excess of its thermal radiation. This doesn’t mean that Stefan-Boltzmann is ‘broken’, it just means that an extra emission mechanism is working. A mechanism that requires free energy to run (unlike normal thermal radiation which requires no free energy). And sure enough, if you read that paper the extra mechanism requires extra free energy.
But you can't use an extra emission mechanism to reduce the emitted radiation.
all it takes is albedo/reflectivity for an object’s equilibrium temperature to be lower than that of a black body in the same radiation environment.
Specifically, the limiting factor is not temperature at all but the error rate of your computer hardware, quantum or not. The ultimate limit to efficiency is set by the error rate, not the temperature to which you can cool the system.
High-energy gamma rays cannot be completely reflected by matter.
For any system, even exotic? By what law? A simple Google search seems to disagree—gamma rays are reflected today, in practice (albeit with difficulty and inefficiently), by multilayer reflectors.
No it’s not. CMB radiation spans the entire EM spectrum.
The vast majority of the energy peaks at microwave frequencies, but fine, yes, there is always some emission at higher frequencies—practical shielding would be complex and multilayer.
You keep making this same mistake. Thermal equilibrium temperature does not depend on surface reflectivity.
You keep bringing this up, but you can't explain how it applies to some basic examples such as the earth. How can you explain the fact that the temperature of planets such as Earth and Venus varies greatly and depends mostly on their albedo? Is it because the system is not in equilibrium? Then who cares about equilibrium? It almost never applies.
If the earth/sun system is not in equilibrium, then my hypothetical reflective object somewhere in deep space receiving radiation only from the CMB is certainly not in equilibrium either.
And finally the universe itself is expanding and is never in equilibrium—the CMB temperature is actually decaying to zero over time.
Until I see a good explanation of planetary albedo and temperature, I can’t take your claim of “basic physics mistake” seriously.
Read that of course, and I'd recommend some of Mike Frank's stuff over it. Obviously the energy cost of bit erasure is the same for all types of computing. Quantum computing is different only in having much lower error/noise/temp tolerances due to decoherence issues.
The ultimate limit to efficiency is set by the error rate, not the temperature at which you can cool the system to.
These are directly linked.
Heat is just thermal noise. And noise and errors are fundamentally the same—uncertainty over states that can explode unless corrected. The error rate for the most advanced computers is absolutely limited by thermal noise (and quantum noise).
This is trivially obvious at extremes—i.e. the error rate of a computer at 10000 K is 100% for most materials. The lowest error rates are only achievable by exotic matter configurations at very low temperatures.
The idealized perfect computer is one with zero entropy—i.e. every quantum state stores meaningful information, and every transition at every time step is a planned computation.
Looking at it another way, using devices and transitions larger than the absolute physical limits is just an easy way to do error correction to handle thermal noise.
I still can't understand why you think the Earth system is representative here... Are you asking why the Earth isn't the same temperature as the Sun? Or the same temperature as the background of space? Because if you remove either one, it would equilibrate with the other. But you're proposing to put your system in deep space where there is only the background. If you did that to Earth, you'd find it would very rapidly equilibrate to close to 2.7 K, and the final temperature is irrespective of surface albedo.
Albedo doesn’t have any relationship with final temperature. Only speed at which equilibrium is reached.
Again, I don’t feel like I have to ‘explain’ anything here… perhaps you could explain, in clearer terms, why you think it bears any relationship to the system we are discussing?
Read that of course, and I'd recommend some of Mike Frank's stuff over it.
It’s great that you’ve read those, unfortunately it seems you haven’t understood them at all.
These are directly linked.
Not in the way you probably think. Error rate depends on hardware design as well as temperature. You’re confusing a set of concepts here. As errors are generated in the computation, the entropy (as measured internally) will increase, and thus the heat level will increase. If this is what you are saying, you are correct. But the rate of generation of these errors (bits/s) is not the same as the instantaneous system entropy (bits) - they’re not even the same unit! You could have a quantum computer at infinitesimally low temperature and it would still probably generate errors and produce heat.
This is really just another way of saying that your computer is not 100% reversible (isentropic). This is because of inevitable uncertainties in construction (is the manufacturing process that created the computer itself a perfectly error-free computer? If so, how was the first perfectly error-free computer constructed?), uncertainties in the physics of operation, and inevitable interaction with the outside world. If you claim you can create a perfectly isentropic computer, then the burden of proof is on you to demonstrate such a system. You can’t expect me to take it on faith that you can build a perfectly reversible computer!
I still can't understand why you think the Earth system is representative here... Are you asking why the Earth isn't the same temperature as the Sun? Or the same temperature as the background of space?
I honestly can’t understand how you can’t understand it. :)
1. Take a spherical body and place it in a large, completely empty universe. The body receives zero incoming radiation, but it emits thermal radiation until it cools to zero or something close to that—agreed? (Quantum noise fluctuations or virtual particles perhaps impose some small nonzero temp, not sure.) Albedo/reflectivity doesn't matter because there is no incoming radiation. Materials with higher emissivity will cool to zero faster.
2. Spherical body in an empty universe that contains a single directional light source that is very far away. The light source is not affected by the body in any significant way and does not prevent the body from emitting radiation. The source and the body will never reach equilibrium on the timescales we care about. The body absorbs radiation according to its albedo and the incoming flux. The body emits radiation according to its temperature and emissivity. It will evolve to a local equilibrium temperature that depends on these parameters.
3. The earth/sun system—it is effectively equivalent to 2. The sun is not infinitely far away, but as far as the earth's temp is concerned the sun is just a photon source—the earth has no effect on the sun's temp, and the objects are not in equilibrium. This situation is equivalent to 2.
We have hard data for situation 3, which shows that the balance between incoming radiation absorbed vs outgoing radiation emitted can differ for a complex composite object based on albedo/reflectivity vs emissivity. The end result is that the object's local equilibrium temperature depends on these material parameters and can differ significantly from the black body temperature for the same input irradiance conditions.
4. Object in deep space. It receives incoming radiation from the CMB—which is just an infinite omnidirectional light source, as in 3. The directionality shouldn't change anything, the energy spectrum shouldn't change anything, so it's equivalent to 3 and 2. The object's resting temperature can be lower than the CMB blackbody 'temperature' (which after all isn't the temp of an actual object, it's tautologically just the temperature of a simple blackbody absorbing/emitting the CMB).
So what am I missing here? Seriously—still waiting to see how #3 could possibly differ.
Whatever principle it is that allows the earth’s resting temp to vary based on surface albedo can be exploited to passively cool the earth, and thus can be exploited to passively cool other objects.
Google is now good enough that it gets some useful hits for "temperature lower than the CMB". In particular, on this thread from ResearchGate I found some useful info. Most of the discussion is preoccupied with negative temps, but one or two of the replies agree with my interpretation and they are unchallenged:
Rüdiger Mitdank · Humboldt-Universität zu Berlin:
The cosmic Background Radiation is in a very good approximation a black Body Radiation. Every Body which is in a thermal Equilibrium with this Radiation source has this temperature. If you have another Radiation sources, usually hotter bodies like suns, the temperature increases.
If due to the surface reflectivity the Absorption is low, the Body temperature approximates to a lower value, that Emission and Absorption are equal. Therefore it might be possible, that bodies consisting of ice or snow and having a clean surface, have a temperature below cosmic Background temperature. This occurs only far away from any other Radiation source.
I would look for comets out of our sun system.
Because if you remove either one, it would equilibrate with the other. But you're proposing to put your system in deep space where there is only the background. If you did that to Earth, you'd find it would very rapidly equilibrate to close to 2.7 K, and the final temperature is irrespective of surface albedo.
The final temp for the earth in the (earth, sun, background) ‘equilibrium’ does depend on the surface albedo.
Error rate depends on hardware design as well as temperature. You’re confusing a set of concepts here. As errors are generated in the computation, the entropy (as measured internally) will increase, and thus the heat level will increase. If this is what you are saying, you are correct.
I recommended Frank's work because it has the clearest unifying explanations of computational entropy/information. A deterministic computer is just an approximation—real systems are probabilistic (and quantum), and eventually we will move to those models of computation. The total entropy is always conserved, with some of the entropy budget being the usable computational bits (qubits) and some being unknown/error bits such as thermal noise (though this generalization can also cover quantum noise). The 'erasure' of a bit really is just intentional randomization.
The idea of a hard error comes from the deterministic approximation, which assumes that the state of every bit is exactly known. In a prob circuit, we have instead a distribution over bit states, and circuit ops transform these distributions.
This is really just another way of saying that your computer is not 100% reversible (isentropic). This is because of inevitable uncertainties in construction (is the manufacturing process that created the computer itself a perfectly error-free computer? If so, how was the first perfectly error-free computer constructed?), uncertainties in the physics of operation,
Uncertainty in the construction can be modeled as a learning/inference problem. Instead of simple deterministic circuits, think of learning probabilistic circuits (there are no ‘errors’ so to speak, just distributions and various types of uncertainty). As inference/learning reduces uncertainty over variables of interest, reversible learning must generate an equivalent amount of final excess noise/garbage bits. Noise bits in excess of the internal desired noise bit reserve would need to be expelled - this is the more sophisticated form of cooling.
and inevitable interaction with the outside world.
Each device has an IO stream that is exactly bit conserved and thus reversible from its perspective. The same principle that applies to each local circuit element applies to each device.
The final limitation is incoming entropy—noise from the outside world. This inflow must be balanced by a matching bit outflow from the internal noise bit reserves. This minimal noise flow (temperature) places ultimate limits on the computational capability of the system in terms of SNR and thus (analog/probabilistic) bit ops.
For the moment I'm just going to ignore everything else in this debate (I have other time/energy commitments...) and just focus on this particular question, since it's one of the most fundamental questions we disagree on.
You are wrong, plain and simple. Rüdiger Mitdank is also wrong, despite his qualifications (I have equivalent qualifications, for that matter). Either that or he has failed to clearly express what he means.
If it were true that you could maintain an object colder than the background without consuming energy (just by altering surface absorption!), then you could have a free energy device. Just construct a heat engine with one end touching the object and the other end being a large black radiator.
If it were true that you could maintain an object colder than the background without consuming energy (just by altering surface absorption!),
Yeah—several examples from Wikipedia for the temperature of a planet indicate that albedo and emissivity can differ (it's implied on this page, directly stated on this next page).
Here under effective temperature they have a model for a planet’s surface temperature where the emissivity is 1 but the albedo can be greater than 0.
Notice that if you plug in an albedo of 1 into that equation, you get a surface temperature of 0K!
The generalized Stefan-Boltzmann law is thus the local equilibrium where irradiance/power absorbed equals irradiance/power emitted:
J_a = J_e
J_a = J_in*(1 - a)
J_e = eoT^4
T = (J_in*(1 - a) / (eo))^(1/4)
J_in is the incoming irradiance from the light source, a is the material albedo, e is the material emissivity, o is the SB const, T is temp.
This math comes directly from the Wikipedia page; I've just converted from power units to irradiance, replacing the star's irradiance term of L/(16 PI D^2) with a constant for an omni light source (CMB).
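A quick numeric check of that equation (a sketch; the inputs are my own illustrative numbers: Earth's surface-averaged solar irradiance ~340 W/m^2, Bond albedo ~0.3, and a hypothetical snowball-like albedo of 0.9):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant (the 'o' above), W m^-2 K^-4

def steady_temp(J_in, a, e=1.0):
    """Solve J_in*(1 - a) = e*o*T^4 for T, per the equations above."""
    return (J_in * (1 - a) / (e * SIGMA)) ** 0.25

print(steady_temp(340, 0.30))  # ~255 K, the textbook effective temperature of Earth
print(steady_temp(340, 0.90))  # ~157 K for a hypothetical snowball-like albedo
```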
In retrospect, one way I could see this being wrong is if the albedo and emissivity are always required to be the same for a particular wavelength. In the earth example the albedo of relevance is for high energy photons from the sun whereas the relevant emissivity is lower energy infrared. Is that your explanation?
then you could have a free energy device. Just construct a heat engine with one end touching the object and the other end being a large black radiator.
Hmm perhaps, but I don’t see how that’s a ‘free’ energy device.
The ‘background’ is a virtual/hypothetical object anyway—the CMB actually is just a flux of photons. The concept of temperature for photons and the CMB is contrived—defined tautologically based on an ideal black body emitter. The actual ‘background temperature’ for a complex greybody in the CMB depends on albedo vs emissivity—as shown by the math from wikipedia.
One can construct a heat engine to extract solar energy using a reflective high-albedo object (low temp reservoir) and a low-albedo black object. Clearly this energy is not free—it comes from the sun. There is no fundamental difference between photons from the sun and photons from the CMB, correct?
So in theory the same principle should apply, unless there is some QM limitation at low temps like 2.7K. Another way you could be correct is if the low CMB temp is somehow ‘special’ in a QM sense. I suggested that earlier but you didn’t bite. For example, if the CMB represents some minimal lower barrier for emittable photon energy, then the math model I quoted from wikipedia then breaks down at these low temps.
But barring some QM exception like that, the CMB is just like the sun—a source of photons.
I’ve never seen someone so confused about the basic physics.
Let’s untangle these concepts.
Effective temperature is not actual temperature. It’s merely the temperature of a blackbody with the same emitted radiation power. As such, it depends on two assumptions:
The emitted power is thermal in origin,
The emission spectrum is the ideal blackbody spectrum.
Of course if these assumptions aren’t true then the temperature estimate is going to be wrong. Going back to my LED example, a glowing LED might have an ‘effective temperature’ of thousands of degrees K. This doesn’t mean anything at all.
The source of your confusion could be that emitted and received radiation sometimes have different spectra. This is indeed true. It’s true of the Earth, for instance. But at equilibrium, absorption and emission are exactly equal at all wavelengths. Please read this: https://en.wikipedia.org/wiki/Kirchhoff%27s_law_of_thermal_radiation
Notice that if you plug in an albedo of 1 into that equation, you get a surface temperature of 0K!
Irrelevant, as I said. (The concept of albedo isn’t very useful for studying thermal equilibrium, I suggest you ignore it)
The generalized Stefan-Boltzmann law is thus the local equilibrium where irradiance/power absorbed equals irradiance/power emitted:
Yes, this is the definition of being at the same temperature, if you didn’t know. (Assuming, of course, that the radiation is thermal in origin and radiation is the only heat transfer process at work, which it is in our example). If you disagree with this then you are simply wrong by definition and there is nothing more to say.
You seem to think that temperature is some concept that exists outside of thermal equilibrium. This is a very common mistake. Temperature is only defined for a system at thermal equilibrium, and when two objects are in thermal equilibrium with one another, they are by definition at the same temperature. It does not matter at all how fast their atoms are moving or what they are made of.
The concept of temperature for photons and the CMB is contrived—defined tautologically based on an ideal black body emitter.
No it’s not. It’s based on analysis of the spectrum, which is almost perfectly the spectrum of an ideal black body.
The ‘background’ is a virtual/hypothetical object anyway
The physics would be exactly the same if it were an actual sheet of black material at 2.7 K covering the universe.
The source of your confusion could be that emitted and received radiation sometimes have different spectra. This is indeed true. It’s true of the Earth, for instance. But at equilibrium, absorption and emission are exactly equal at all wavelengths. Please read this: https://en.wikipedia.org/wiki/Kirchhoff%27s_law_of_thermal_radiation
That was indeed a source of initial confusion, as I stated above, and I read Kirchhoff's law. I said:
if the albedo and emissivity are always required to be the same for a particular wavelength. In the earth example the albedo of relevance is for high energy photons from the sun whereas the relevant emissivity is lower energy infrared. Is that your explanation?
However this still doesn't explain how passive temps lower than 2.7 K are impossible. Passive albedo cooling works for the earth because snow/ice is highly reflective (inefficient absorber/emitter) at the higher frequencies where most of the sun's energy is concentrated, and yet it is still an efficient absorber/emitter at the lower infrared frequencies, correct?
Now—what prevents the same principle from operating at lower temps? If ice can reflect efficiently at 500 nm and emit efficiently at 10 um, why can't some hypothetical object reflect efficiently at ~cm range CMB microwave and emit efficiently at even lower frequencies?
You said: “But at equilibrium, absorption and emission are exactly equal at all wavelengths.”
But clearly, this isn’t the case for the sun earth system—and the law according to wikipedia is wavelength dependent. So I don’t really understand your sentence.
You are probably going to say … 2nd law of thermodynamics, but sorry even assuming that said empirical law is actually axiomatically fundamental, I don’t see how it automatically rules out these scenarios.
Notice that if you plug in an albedo of 1 into that equation, you get a surface temperature of 0K!
Irrelevant, as I said. (The concept of albedo isn’t very useful for studying thermal equilibrium, I suggest you ignore it)
This isn’t an explanation, you still haven’t explained what is different in my examples.
The physics would be exactly the same if it were an actual sheet of black material at 2.7 K covering the universe.
Sure—but notice that it’s infinitely far away, so the concept of equilibrium goes out the window.
Also—aren’t black holes an exception? An object using a black hole as a heat sink could presumably achieve temps lower than 2.7K.
why can’t some hypothetical object reflect efficiently at ~cm range CMB microwave and emit efficiently at even lower frequencies?
I never said it can’t. Such a material is definitely possible. It just couldn’t passively reach lower temperature than the background. Assuming it’s far out in space, as it cooled down to 2.7 K, it would eventually reach the limit where its absorption and emission at all frequencies equalled the background (due to Kirchhoff’s law), and that’s where the temperature would stay. If it started out at a lower temperature (due to being cooled beforehand) it would absorb thermal radiation until its temperature equalled the background (again, this is directly due to how we define temperature), and again that’s where it would stay.
If you have a problem with that, take it up with Kirchhoff, not me :)
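To spell out the argument (my paraphrase of the standard one): model the background as E(λ) = B(λ, T_bg), a black body spectrum at T_bg = 2.7 K. With Kirchhoff's law the balance condition becomes ∫ e(λ) B(λ, T) dλ = ∫ e(λ) B(λ, T_bg) dλ, and since B(λ, T) increases monotonically with T at every wavelength, the only solution is T = T_bg, for any emissivity function e(λ). (Sunlight at Earth's distance, by contrast, is a geometrically diluted ~5800 K spectrum rather than a full black body bath at any temperature, so no single T balances every wavelength and the equilibrium point depends on e(λ).)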
The Earth system is completely different here because neither the Earth is in thermal equilibrium with the Sun, nor is the Earth in thermal equilibrium with the background, nor is the Sun in thermal equilibrium with the background. There is a net transfer of thermal energy occurring from the sun to the earth to the background (yes the sun is heating up the background—but not by much though). And ‘net transfer of energy’ means no equilibrium. Sources that use ‘thermal equilibrium’ for the relationship between the Sun and the Earth are using the term loosely and incorrectly.
The situation here is far from equilibrium because of the massive amounts of energy that the sun is putting out. This is the very opposite of ‘passive’ operation.
why can’t some hypothetical object reflect efficiently at ~cm range CMB microwave and emit efficiently at even lower frequencies?
I never said it can’t. Such a material is definitely possible. It just couldn’t passively reach lower temperature than the background.
The 'background' is just an incoming flux of photons. If you insist that this photon flux has a temperature, then it is obviously true that an object can have a lower temperature than this background flux, because an icy earth can have a lower temp than its background flux. As you yourself said earlier, temperature is only defined in terms of some equilibrium condition, and based on the equations that define temperature (below), the grey body 'temperature' for an irradiance distribution can differ from the black body temperature for the same distribution.
The math allows objects to shield against irradiance and achieve lower temps.
The general multispectral thermal emission of a grey body is just the black body emission function (Planck's law) scaled by the wavelength/frequency-dependent emissivity function for the material (as this is how emissivity is defined):

G(λ, T) = e(λ) * B(λ, T)

The outgoing thermal emission power for a grey body is thus:

P_emit = A * ∫ e(λ) B(λ, T) dλ

The object's temperature is stable when the net energy emitted equals the net energy absorbed (per unit time), where the net absorption is just the irradiance scaled by the emissivity function:

∫ e(λ) B(λ, T) dλ = ∫ e(λ) E(λ) dλ

Where E(λ) is the incoming irradiance distribution as a function of wavelength, and the rest should be self-explanatory.
I don't have a math package handy to solve this for T. Nonetheless, given the two inputs: the material's emissivity and the incoming irradiance (both functions of wavelength), a simple numerical integration and optimization can find T solutions.
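A minimal sketch of that numerical solve (my code, with assumed inputs: a hypothetical step-function emissivity that reflects below 5 cm, and the incoming irradiance E(λ) modeled as a 2.7 K black body spectrum; swap in whatever e(λ) and E(λ) you want to test):

```python
import numpy as np

H, C, K_B = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann (SI)

lam = np.logspace(-4, 0, 4000)  # wavelength grid, 0.1 mm to 1 m

def planck(lam, T):
    """Black body spectral radiance B(lambda, T), with an overflow guard."""
    x = np.minimum(H * C / (lam * K_B * T), 700.0)
    return (2 * H * C**2 / lam**5) / np.expm1(x)

# Hypothetical material: near-perfect reflector below 5 cm, ordinary emitter above.
e = np.where(lam < 5e-2, 1e-3, 1.0)

E_in = planck(lam, 2.7)  # incoming irradiance modeled as a 2.7 K black body

def net_flux(T):
    """Emitted minus absorbed power; Kirchhoff ties absorptivity to e(lambda)."""
    return np.trapz(e * planck(lam, T), lam) - np.trapz(e * E_in, lam)

# Bisect for the steady-state T where emission balances absorption.
lo, hi = 0.05, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if net_flux(mid) < 0 else (lo, mid)
print(f"steady-state temperature: {lo:.3f} K")
```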
The only inputs to this math are the incoming irradiance distribution and the material’s emissivity/absorptivity distribution. There is no input labeled ‘background temperature’.
Assuming I got it right, this should model/predict the temps for earth with different ice/albedo/greenhouse situations (ignoring internal heating sources), and thus obviously also should allow shielding against the background! All it takes is a material function which falls off heavily at frequencies before the mean of the input irradiance distribution.
This math shows that a grey body in space can have an equilibrium temperature less than a black body.
A hypothetical ideal low temp whitebody would have an e function that is close to zero across the CMB frequency range, but close to 1 for frequencies below the CMB frequency range. For current shielding materials the temp would at best be only a little lower than the CMB black body temp due to the 4th root, but still—for some hypothetical material the temp could theoretically approach zero as emissivity across the CMB range approaches zero. Put another way, the CMB doesn't truly have a temperature of 2.7 K—that is just the black body approx temp of the CMB.
If your position is correct, something must be wrong with this math—some extra correction is required—what is it?
Assuming it’s far out in space, as it cooled down to 2.7 K, it would eventually reach the limit where its absorption and emission at all frequencies equalled the background (due to Kirchhoff’s law), and that’s where the temperature would stay.
No—that isn't what the math says—according to the functions above, which define temperature for this situation. As I pointed out earlier, 2.7 K is just the black body approx temp of the CMB (defined as the temp of a black body at equilibrium with the CMB!), and the grey body temp can be lower. Kirchhoff's law just says that the material absorptivity at each wavelength equals the emissivity at that wavelength. The Wikipedia page even has an example with white paint analogous to the icy earth example.
There is a net transfer of thermal energy occurring from the sun to the earth to the background (yes the sun is heating up the background—but not by much though). And ‘net transfer of energy’ means no equilibrium. Sources that use ‘thermal equilibrium’ for the relationship between the Sun and the Earth are using the term loosely and incorrectly.
None of this word level logic actually shows up in the math. The CMB is just an arbitrary set of photons. Translating the actual math to your word logic, the CMB set can be split as desired into subsets based on frequency such that a material could shield against the higher frequencies and emit lower frequencies—as indicated by the math. There is thus a transfer between one subset of the CMB, the object, and another CMB subset.
Another possible solution in your twisted word logic: there is always some hypothetical surface at zero that is infinitely far away. Objects can radiate heat towards this surface—and since physics is purely local, the code/math for local photon interactions can't possibly 'know' whether or not said surface actually exists. Or replace the surface with vacuum.
For your position to be correct, there must be something extra in the math not yet considered—such as some QM limitation on low emission energies.
You’re abusing the math here. You’ve written down the expanded form of the Stefan-Boltzmann equation, which assumes a very specific relationship between temperature and emitted spectrum (which you say yourself). Then you write the temperature in terms of everything else in the equation, and assume a completely different emitted spectrum that invalidates the original equation that you derived T from in the first place.
What you’re doing isn’t math, it’s just meaningless symbolic manipulation, and it has no relationship to the actual physics.
If you insist that this photon flux has a temperature,
Of course it does—this is a very very common and useful concept in physics—and when you say it doesn’t this betrays lack of familiarity with physics. Photon gas most certainly has temperature, in exactly the same way as a gas of anything else has temperature. Not only that, but it also has pressure and entropy.
In fact, in some situations, like an exploding hydrogen bomb, a photon gas has considerable temperature and pressure, far in excess of say the temperature in the center of the sun or the pressure in the center of the Earth.
You say yourself that you assume ‘local equilibrium with it’s incoming irradiance’. Firstly, you’re using the word ‘local’ incorrectly. I assume you mean ‘equilibrium at each wavelength’. If so, this is the very definition of being at the same temperature as the incoming irradiance (assuming everything here is thermal in nature) and, again, there is nothing more to say. http://physics.info/temperature/
You're abusing the math here. You've written down the expanded form of the Stefan-Boltzmann equation...
EDIT: I fixed the equations above, replaced with the correct emission function for a grey body.
Yes I see—the oT^4 term on the left needs to be replaced with the black body emission function of wavelength and temperature—which I gather is just Planck’s Law. Still, I don’t (yet) see how that could change the general conclusion.
Where E_{lambda}. is the incoming irradiance distribution as a function of wavelength, and the rest should be self-explanatory. For any incoming irradiance spectrum, the steady state temperature will depend on the grey body emissivity function and in general will differ from that of a black body.
Photon gas most certainly has temperature, in exactly the same way as a gas of anything else has temperature.
I’m aware that the concept of temperature is applied to photons—but given that they do not interact this is something very different than temperature for colliding particles. The temperature and pressure in the examples such as the hydrogen bomb require interactions through intermediaries.
The definition of temperature for photon gas in the very page you linked involves a black body model due to the lack of photon-photon interactions—supporting my point about photon temps such as the CMB being defined in relation to the black body approx.
You say yourself that you assume ‘local equilibrium with it’s incoming irradiance’. Firstly, you’re using the word ‘local’ incorrectly
I meant local in the physical geometric sense—as in we are modelling only a local object and the space around it over small smallish timescale. The meaning of local equilibrium should thus be clear—the situation where net energy emitted equals net energy absorbed. This is the same setup as the examples from wikipedia.
I’m sorry that you’re so insistent on your incorrect viewpoint that you’re not even willing to listen to the obvious facts, which are really very simple facts.
Actually I’ve updated numerous times during this conversation—mostly from reading the relevant physics. I’ve also updated slightly on answers from physicists which reach my same conclusion.
I’ve provided the radiosity equations for a grey body in outer space where the temperature is driven only by the balance between thermal emission and incoming irradiance. There are no feedback effects between the emission and the irradiance, as the latter is fixed—and thus there is no thermodynamic equilibrium in the Kirchoff sense. If you still believe that you are correct, you should be able to show how that math is wrong and what the correct math is.
This grey body radiosity function should be able to model the temp of say an icy earth, and can show how that temp changes as the object is moved away from the sun such that the irradiance shifts from the sun’s BB spectrum to that of the CMB.
We know for a fact that real grey body objects can have local temps lower than the black body temp for the irradiance of an object near earth. The burden of proof is now on you to show how just changing the irradiance spectrum can somehow lead to a situation where all possible grey body materials have the same steady state temperature.
You presumably believe such math exists and that it will show the temp has a floor near 2.7K for any possible material emissivity function, but I don’t see how that could possibly work.
I assume you mean ‘equilibrium at each wavelength’
No. In retrospect I should not have used the word ‘equilibrium’.
As you yourself said, the earth is not in thermodynamic equilibrium with the sun, and this is your explanation as to why shielding works for the earth.
Replace the sun with a distant but focused light source, such as a large ongoing explosion. The situation is the same. The earth is never in equilibrium with the explosion that generated the photons.
The CMB is just the remnant of a long gone explosion. The conditions of thermodynamic equilibrium do not apply.
If you are correct then you should be able to show the math. Provide an equation which predicts the temp of an object in space only as a function of the incoming (spectral) irradiance and that object’s (spectral) emissivity.
In the transcendent models, civs use all available resources to expand inward, because that allows for continued exponential growth. Transcendent civs don’t expand outward because it is always an exceptionally poor use of resources. Notice that this is true today—we could launch an interstellar colony ship for some X trillions, but spending those resources on Moore’s Law is vastly preferred. In the transcendent model, this just continues to be true indefinitely—likely ending in hard singularities, strange machines that create new universes, etc.
Finally, the distributions over various alien civs are not really statistically independent, even if the civs developed independently. Our uncertainty is at the model level, in terms of how physics and future engineering work. The particular instance variables of each civ don’t matter so much. So if the cold dark model is correct, all civs look like that; if the transcendent model is correct, all civs look like that; etc.
The hypothesis that dark matter could be composed of cold clumps of ordinary matter has been considered (these objects are called MACHOs), and as far as I know it has been largely ruled out: MACHOs have properties that aren’t consistent with how dark matter actually behaves.
I also think you’re making an unfounded assumption here—that advanced civilizations could be stealthy. But what we know suggests that there ain’t no stealth in space. There are a number of difficulties in keeping large energy-consuming objects cold, and even if you succeeded in keeping the brains themselves cold, the associated support equipment and fusion reactors that you mention would be pretty hot. And the process of constructing the brains would be very hot.
Unrelated. There is baryonic and non-baryonic dark matter. Most of the total dark matter is currently believed to be non-baryonic, but even leaving that aside, the amount of baryonic dark matter is still significant—perhaps on par with or greater than the baryonic visible matter. Most important of all is the light/dark ratio of heavier-element baryonic matter and smaller planets/planetoids. There are some interesting new results suggesting most planets/planetoids are free floating rather than bound to stars (see links in my earlier article—“nomads of the galaxy” etc).
There is a limit to how big a giant computing device can get before gravitational heating makes the core unusable—the ideal archilect civ may be small, too small to detect directly. But perhaps they hitch rides orbiting larger objects.
Also, we don’t know enough about non-baryonic dark matter/energy to rule it out as having uses or a relation to elder civs (it seems unlikely, but there are a number of oddities concerning the whole dark energy/inflation model).
Well we are talking about hypothetical post-singularity civs...
There doesn’t appear to be any intrinsic limit to computational energy efficiency with reversible computing, and the practicality of advanced quantum computing appears to depend on how close one can get to absolute zero and how long coherence can be maintained there.
So at the limits, computational civs approach CMB temperature and use negligible energy for computation. At some point it becomes worthwhile to spend some energy to move away from stars.
Any model makes some assumptions based on what aspects of engineering/physics we believe will still hold into the future. The article you linked makes rather huge assumptions—alien civs need to travel around in ships, ships can only move by producing thrust, etc. Even then, from what I understand, detecting thrust is only possible at in-system distances, not light-year distances.
The cold dark alien model I favor simply assumes advanced civs will approach physical limits.
The CMB temperature (2.7 K) is still very warm in relative terms, and it’s hard to see how effective large-scale quantum computing could be done at that temperature (current crude quantum computers operate at millikelvin temperatures and still have only very minuscule levels of coherence). The only way to get around this is to either use refrigeration to cool down the system (leading to a very hot fusion reactor and refrigeration equipment) or make do with 2.7 K, which would probably lead to a lot of heat dissipation.
You would absorb a large amount of entropy from the CMB at this temperature (about 1000 terabytes per second per square meter); you’d need to compensate for this entropy to keep your reversible computer working.
The CMB is just microwave radiation right? So reflective shielding can block most of that. What are the late engineering limits for microwave reflective coatings? With superconducting surfaces, metamaterials, etc?
Some current telescopes cool down subcomponents to very low temperatures without requiring large fusion reactors.
If the physical limits of passive shielding are non-generous, this just changes the ideal designs to use more active cooling than they otherwise would and to limit the ratio of quantum computing stuff to other stuff—presumably there is always some need for active cooling and that is part of the energy budget, but that budget can still be very small and the final device temperature could even be less than CMB.
I’m afraid it can’t. The ‘shielding’ itself would soon reach equilibrium with the CMB and begin emitting at 2.7 K. It makes no difference what it’s made of. You can’t keep an object cooler than the background temperature indefinitely without expending energy. If you could, you would violate conservation of energy.
And, again, the process of generating that energy would produce a lot of heat and preclude stealth.
But the bulk of the telescope is never colder than (or even equal to) the background temperature. JWST, for instance, is designed for a 50 K operating temperature, which emits radiation at about 100,000 times the background level according to the Stefan-Boltzmann law ((50/2.7)^4 ≈ 1.2 × 10^5).
Again, this would just make the problem worse, as a decrease in entropy in one part of the system must be balanced by a larger increase in entropy elsewhere. I’m talking about the possibility of stealth here (while maintaining large-scale computation).
This is a non-obvious statement to me. It seems that a computation on the level you’re describing (much larger in scale than the combined brainpower of current human civilization by orders of magnitude) would require a large amount of mass and/or energy and would thus create a very visible heat signature. It would be great if you could offer some calculations to back up your claim.
Years ago I had the idea that advanced civilizations can radiate waste heat into black holes instead of interstellar space, which would efficiently achieve much lower temperatures and also avoid creating detectable radiation signatures. See http://www.weidai.com/black-holes.txt and my related LW post.
The recent news about KIC 8462852 immediately reminded me of your old txt file article. I’m really curious what you think about the recent information given how much you seem to have thought about advanced civs.
It’s an interesting idea.
Stable black holes seem difficult to create though—requires a lot of mass. Could there be a shortcut?
EDIT: After updating through this long thread, I am now reasonably confident that the above statement is incorrect. Passive shielding in the form of ice can cool the earth against the sun’s irradiance to a temp lower than the black body temp, and there is nothing special about the CMB irradiance. See the math here at the end of the thread.
Sure—if it wasn’t actively cooled, but of course we are assuming active cooling. The less incoming radiation the system absorbs, the less excess heat it has to deal with.
Sure you need to expend energy, but obviously the albedo/reflectivity matters a great deal. Do you know what the physical limits for reflectivity are? For example—if the object’s surface can reflect all but 10^-10 of the incoming radiation, then the active cooling demands are reduced in proportion, correct?
I’m thinking just in terms of optimal computers, which seems to lead to systems that are decoupled from the external environment (except perhaps gravitationally), and thus become dark matter.
The limits of reversible computing have been discussed in the lit, don’t have time to review it here, but physics doesn’t appear to impose any hard limit on reversible efficiency. Information requires mass to represent it and energy to manipulate it, but that energy doesn’t necessarily need to be dissipated into heat. Only erasure requires dissipation. Erasure can be algorithmically avoided by recycling erased bits as noise fed into RNGs for sampling algorithms. The bitrate of incoming sensor observations must be matched by an outgoing dump, but that can be proportionally very small.
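To make the erasure point concrete, here is a tiny sketch (my own illustration, standard library only) of why reversible gates avoid Landauer costs in principle: they are bijections, so no states merge.

```python
# A reversible gate is a bijection on its state space, so no information
# is destroyed and no Landauer erasure cost applies in principle.
from itertools import product

def toffoli(a: int, b: int, c: int) -> tuple:
    """Toffoli gate: flip c iff a and b are both 1. Universal for reversible logic."""
    return (a, b, c ^ (a & b))

outputs = [toffoli(*bits) for bits in product((0, 1), repeat=3)]
assert len(set(outputs)) == 8  # bijective: all 8 inputs map to distinct outputs

# Contrast with irreversible AND: 4 inputs map onto only 2 outputs, so
# (0,0), (0,1) and (1,0) merge into one state; the lost bits are what must
# be paid for thermodynamically (or recycled as noise, as suggested above).
```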
I think you’re still not ‘getting it’, so to speak. You’ve acknowledged that active cooling is required to keep your computronium brain working. This is another way of saying you expend energy to remove entropy from some part of the system (at the expense of a very large increase in entropy in another part of the system). Which is what I said in my previous reply. However you still seem to think that, given this consideration, stealth is possible.
By the way, the detection ranges given in that article are for current technology! Future technology will probably be much, much better. It’s physically possible, for instance, to build a radio telescope consisting of a flat square panel array of antennas one hundred thousand kilometers on a side. Such a telescope could detect things we can’t even imagine with current technology. It could resolve an ant crawling on the surface of Pluto or provide very detailed surface maps of exoplanets. Unlike stealth, there is no physical limit that I can think of to how large you can build a telescope.
Not theoretically, no. However, at any temperature higher than 0 K, purely reversible computing is impossible. Unfortunately there is nowhere in the universe that is that cold, and again, maintaining this cold temperature requires a constant feed of energy. These considerations impose hard, nonzero limits on power consumption. Performing meaningful computations with arbitrarily small power consumption is impossible in our universe.
You’re repeatedly getting very basic facts about physics and computation wrong. I love talking about physics but I don’t have the time or energy to keep debating these very basic concepts, so this will probably be my last reply.
No—because you didn’t actually answer my question, and you are conflating the reversible computing issue with the stealth issue.
I asked:
The energy expended and entropy produced for cooling is proportional to the incoming radiation absorbed, correct? And this can be lowered arbitrarily with reflective shielding—or is that incorrect? Nothing whatsoever to do with stealth, the context of this discussion concerns only optimal computers.
Don’t understand this—the theory on rev computing says that energy expenditure is proportional to bit erasure, plus whatever implementation efficiency. The bit erasure cost varies with temperature, sure, but you could still theoretically have a reversible computer working at 100K.
You seem to be thinking that approaching zero energy production requires zero temperature—no. Low temperature reduces the cost of bit erasure, but bit erasure itself can also be reduced to arbitrarily low levels with algorithmic level recycling.
Which are?
Such as? Am I incorrect in the assumption that the cost of active cooling is proportional to the entropy to be removed, and thus to the incoming radiation absorbed—and thus can be reduced arbitrarily with shielding?
External surface area of computer = A.
Background temperature = T ~ 2.7 K.
Stefan-Boltzmann constant: σ
Thermal power absorbed by system: P = σAT^4 (J/s)
Entropy absorbed by system: X = P / (T k_B log(2)) (bits/s)
Minimal amount of energy required to overcome this entropy: k_B T X * log(2) -- this happens to be equal to P.
Limit: External surface area of computer times σT^4.
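For concreteness, here is a minimal numeric sketch of the quantities just listed (my code; the names are mine, the constants standard):

```python
# Thermal power and entropy flux absorbed by a passive surface from a
# 2.7 K background, per the formulas given above.
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
K_B = 1.380649e-23       # Boltzmann constant, J / K

def background_load(area_m2, t_background=2.7):
    """Return (power in W, entropy flux in bits/s) absorbed from the background."""
    p = SIGMA * area_m2 * t_background ** 4        # P = sigma * A * T^4
    x = p / (K_B * t_background * math.log(2))     # X = P / (k_B T ln 2)
    return p, x

p, x = background_load(1.0)
print(f"P = {p:.3e} W, X = {x:.3e} bits/s")  # ~3.0e-6 W and ~1.2e17 bits/s per m^2
# Note the Landauer cost of erasing X bits/s at T, k_B * T * ln(2) * X, equals P.
```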
As for active cooling, I think the burden of proof here is on you to present a viable system and the associated calculations. How much energy does it take to keep e.g. a sphere of a certain radius cold?
The thermal power you quoted is the perfect black body approximation. For a grey body, the thermal power is:
P = eoAT^4
where e is the material-specific emissivity coefficient, and the same rule holds for absorption.
You seem to be implying that for any materials, there is a fundamental physical law which requires that absorption and emission efficiency is the same—so that a reflector which absorbs only e% of the incoming radiation is also only e% efficient at cooling itself through thermal emission.
Fine—even assuming that is the case, there doesn’t seem to be any hard limit to reflective efficiency. A hypothetical perfect whitebody which reflects all radiation perfectly would have no need of cooling by thermal emission—you construct the object (somewhere in deep space away from stars) and cool it to epsilon above absolute zero, and then it will remain that cold for the duration of the universe.
There is also current ongoing research into zero-index materials that may exhibit ‘super-reflection’.
If we can build superconductors, then super-reflectors should be possible for advanced civs—a superconductor achieves a state of perfect thermal decoupling for electron interactions, suggesting that exotic material states could achieve perfect thermal decoupling for photon interactions.
So the true physical limit is for a perfect white body with reflectivity 1. The thermal power and entropy absorbed is zero, no active cooling required.
Furthermore, it is not clear at all that reflection efficiency must always equal emission efficiency.
Wikipedia’s article on the Stefan-Boltzmann Law hints at this:
What do you make of that?
Also—I can think of a large number of apparent counter-examples to the rule that reflection and emission efficiency must be tied.
How do we explain greenhouse warming of the earth, snowball earth, etc? The temperature of the earth appears to mainly depend on its albedo, and the fraction of incoming light reflected doesn’t appear to be intrinsically related to the fraction of outgoing light, with separate mechanisms affecting each.
Or just consider a one-way mirror: it reflects light in one direction, but is transparent in the other. If you surround an object in a one-way mirror (at CMB infrared/microwave wavelengths), wouldn’t it stay very cold, as it can emit infrared but is protected from absorbing it? Or is this destined to fail for some reason?
I find nothing in the physics you have brought up to rule out devices with long term temperatures much lower than 2.7K—even without active cooling. Systems can be out of equilibrium for extremely long periods of time.
Again, you’re getting the fundamental and basic physics wrong. You’ve also evaded my question.
There is no such thing as a perfect whitebody. It is impossible. All those examples you mention are for narrow-band applications. Thermal radiation is wideband and occurs over the entire electromagnetic spectrum.
The piece in the wikipedia article links to papers such as http://arxiv.org/pdf/1109.5444.pdf in which thermal radiation (and absorption) are increased, not decreased!
Greenhouse warming of the Earth is an entirely different issue and I don’t see how it’s related. The Earth’s surface is fairly cold in comparison to the Sun’s.
One-way mirrors do not exist. http://web.archive.org/web/20050313084618/http://cu.imt.net/~jimloy/physics/mirror0.htm What are typically called ‘one-way mirrors’ are really just ordinary two-way partially-reflective mirrors connecting two rooms where one room is significantly dimmed compared to the other.
Well, firstly, you have to cool it down to below 2.7K in the first place. That most certainly requires ‘active cooling’. Then you can either let it slowly equilibrate or keep it actively cold. But then you have to consider the carnot efficiency of the cooling system (which dictates energy consumption goes up as e/T_c, where T_c is the temperature of the computer and e is the energy dissipated by the computer). So you have to consider precisely how much energy the computer is going to use at a certain temperature and how much energy it will take to maintain it at that temperature.
EDIT: You’ve also mentioned in that thread you linked that “Assuming large scale quantum computing is possible, then the ultimate computer is thus a reversible massively entangled quantum device operating at absolute zero.” Well, such a computer would not only be fragile, as you said, but it would also be impossible in the strong sense. It is impossible to reach absolute zero because doing so would require an infinite amount of energy: http://io9.com/5889074/why-cant-we-get-down-to-absolute-zero . For the exact same reason, it is impossible to construct a computer with full control over all the atoms. Every computer is going to have some level of noise and eventual decay.
Show instead of tell. I didn’t yet answer your question about the initial energy cost of cooling the sphere because it’s part of the initial construction cost, and you haven’t yet answered my questions about reflectivity vs emission and how it relates to temperature.
Says what law—and more importantly—what is the exact limit then? Perfect super-conductivity may be impossible but there doesn’t appear to be an intrinsic limit to how close one can get, and the same appears to apply for super-reflection. This whole discussion revolves around modeling technologies approaching said limits.
This helps my case—the incoming radiation is narrow-band microwave from the CMB. The outgoing radiation can be across the spectrum.
If the ‘law’ can be broken by materials which emit more than the law allows, this also suggests the ‘law’ can be broken in other ways, as in super-reflectors.
Ok.
If the earth’s equilibrium temperature varies based on the surface albedo, this shows that reflectivity does matter and suggests a hypothetical super-reflector shielding for the CMB microwave could lead to lower than CMB temperatures. (because snow covering of the earth leads to lower equilibrium temperatures than a black-body at the same distance from the sun.)
Do you? I’m not clear on that—you haven’t answered the earth counter-example, which seems to show that even without active cooling, all it takes is albedo/reflectivity for an object’s equilibrium temperature to be lower than that of a black body in the same radiation environment. Is there something special about low temps like 2.7K?
Apparently coherence in current quantum computers requires millikelvin temperatures, which is why I’m focusing on the limits approaching 0K. And from what I understand this is fundamental—as the limits of computing involve very long large coherent states only possible at temperatures approaching 0.
If we weren’t considering quantum computing, then sure, I don’t see any point to active cooling below 2.7K. The energy cost of bit erasures is ~CTc for some constant C, but the cooling cost goes as e/Tc. So this effectively cancels out—you don’t get any net energy efficiency gain for cooling below the background temperature. (Of course access to black holes much colder than the CMB changes that.)
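A toy numeric check of that cancellation (a sketch assuming ideal Landauer erasure and Carnot-limited cooling; the names are mine):

```python
# Erasing one bit at Tc costs ~ k_B*Tc*ln(2) (Landauer); pumping that heat up
# to an environment at Th costs at least (Th - Tc)/Tc per unit heat (Carnot),
# so the total work per erased bit is ~ k_B*Th*ln(2), independent of Tc.
import math

K_B = 1.380649e-23  # J/K

def cost_per_erased_bit(tc, th=2.7):
    landauer = K_B * tc * math.log(2)      # heat dissipated at Tc
    pumping = landauer * (th - tc) / tc    # minimum work to reject it at Th
    return landauer + pumping              # algebraically k_B * th * ln(2)

for tc in (2.7, 1.0, 0.1, 0.001):
    print(f"Tc = {tc:6.3f} K -> {cost_per_erased_bit(tc):.3e} J/bit")
# Every line prints the same ~2.6e-23 J/bit, illustrating the cancellation.
```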
Yes—but again we are discussing limits analysis where said quantities approach zero, or infinity or whatever.
You can trivially prove this for yourself. High-energy gamma rays cannot be completely reflected by matter. All thermal radiation contains some high-energy gamma rays. Thus no material can perfectly reflect thermal radiation. QED.
No it’s not. CMB radiation spans the entire EM spectrum. Thermal radiation is almost the exact opposite of narrow-band radiation.
It’s not really broken though. It’s just that radiation in these materials happens through mechanisms beyond conventional blackbody radiation. A common LED emits radiation far in excess of its thermal radiation. This doesn’t mean that Stefan-Boltzmann is ‘broken’, it just means that an extra emission mechanism is working. A mechanism that requires free energy to run (unlike normal thermal radiation which requires no free energy). And sure enough, if you read that paper the extra mechanism requires extra free energy.
But you can’t use an extra emission mechanism to reduce the emitted radiation.
You keep making this same mistake. Thermal equilibrium temperature does not depend on surface reflectivity. https://www.researchgate.net/post/Is_it_possible_to_distinguish_thermal_bodies_in_equilibrium/1
This is a very basic physics error.
It makes no difference what type of computing you’re considering. I suggest reading http://arxiv.org/pdf/quant-ph/9908043.pdf
Specifically, the limiting factor is not temperature at all but the error rate of your computer hardware, quantum or not. The ultimate limit to efficiency is set by the error rate, not by the temperature to which you can cool the system.
For any system, even exotic? By what law? A simple google search seems to disagree—gamma rays are reflected today, in practice (albeit with difficulty and inefficiently), by multilayer reflectors.
The vast majority of the energy peaks in microwave frequencies, but fine yes there is always some emission in higher frequencies—practical shielding would be complex and multilayer.
You keep bringing this up, but you can’t explain how it applies to some basic examples such as the earth. How can you explain the fact that the temperature of planets such as earth and venus varies greatly and depends mostly on their albedo? Is it because the system is not in equilibrium? Then who cares about equilibrium? It almost never applies.
If the earth/sun system is not in equilibrium, then my hypothetical reflective object somewhere in deep space receiving radiation only from the CMB is certainly not in equilibrium either.
And finally the universe itself is expanding and is never in equilibrium—the CMB temperature is actually decaying to zero over time.
Until I see a good explanation of planetary albedo and temperature, I can’t take your claim of “basic physics mistake” seriously.
Read that of course, and I’d recommend some of Mike Frank’s stuff over it. Obviously the energy cost of bit erasure is the same for all types of computing. Quantum computing is different only in having much lower error/noise/temp tolerances due to decoherence issues.
These are directly linked.
Heat is just thermal noise. And noise and errors are fundamentally the same—uncertainty over states that can explode unless corrected. The error rate for the most advanced computers is absolutely limited by thermal noise (and quantum noise).
This is trivially obvious at extremes—i.e. the error rate of a computer at 10000K is 100% for most materials. The lowest error rates are only achievable by exotic matter configurations at very low temperatures.
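A rough illustration of the temperature/error link (a sketch; the barrier height below is a made-up example, not a measured value):

```python
# Probability that thermal noise kicks a bit over an energy barrier E scales
# like exp(-E / (k_B * T)) per attempt (a standard Boltzmann-factor estimate).
import math

K_B = 1.380649e-23  # J/K

def flip_probability(barrier_joules, temp_kelvin):
    return math.exp(-barrier_joules / (K_B * temp_kelvin))

barrier = 40 * K_B * 300  # hypothetical barrier of ~40 k_B T_room (~1.7e-19 J)
for t in (3, 300, 3000, 10000):
    print(f"T = {t:6d} K -> flip probability ~ {flip_probability(barrier, t):.3e}")
# Negligible at cryogenic temps, ~4e-18 at room temp, and order 0.1-1 by 1e4 K,
# which is why reliable bits are impossible for most materials at such temps.
```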
The idealized perfect computer is one with zero entropy—i.e. every quantum state stores meaningful information, and every transition at every time step is a planned computation.
Looking at it another way, using devices and transitions larger than the absolute physical limits is just an easy way to do error correction to handle thermal noise.
I still can’t understand why you think the Earth system is representative here… are you asking why the Earth isn’t the same temperature as the Sun? Or the same temperature as the background of space? Because if you remove either one, it would equilibrate with the other. But you’re proposing to put your system in deep space where there is only the background. If you did that to Earth, you’d find it would very rapidly equilibrate to close to 2.7 K, and the final temperature is irrespective of surface albedo.
Albedo doesn’t have any relationship with the final temperature—only the speed at which equilibrium is reached.
Again, I don’t feel like I have to ‘explain’ anything here… perhaps you could explain, in clearer terms, why you think it bears any relationship to the system we are discussing?
It’s great that you’ve read those, unfortunately it seems you haven’t understood them at all.
Not in the way you probably think. Error rate depends on hardware design as well as temperature. You’re confusing a set of concepts here. As errors are generated in the computation, the entropy (as measured internally) will increase, and thus the heat level will increase. If this is what you are saying, you are correct. But the rate of generation of these errors (bits/s) is not the same as the instantaneous system entropy (bits) - they’re not even the same unit! You could have a quantum computer at infinitesimally low temperature and it would still probably generate errors and produce heat.
This is really just another way of saying that your computer is not 100% reversible (isentropic). This is because of inevitable uncertainties in construction (is the manufacturing process that created the computer itself a perfectly error-free computer? If so, how was the first perfectly error-free computer constructed?), uncertainties in the physics of operation, and inevitable interaction with the outside world. If you claim you can create a perfectly isentropic computer, then the burden of proof is on you to demonstrate such a system. You can’t expect me to take it on faith that you can build a perfectly reversible computer!
I honestly can’t understand how you can’t understand it. :)
1. Take a spherical body and place it in a large completely empty universe. The body receives zero incoming radiation, but it emits thermal radiation until it cools to zero or something close to that—agreed? (quantum noise fluctuations or virtual particles perhaps impose some small nonzero temp, not sure) Albedo/reflectivity doesn’t matter because there is no incoming radiation. Materials with higher emissivity will cool to zero faster.
2. Spherical body in an empty universe that contains a single directional light source that is very far away. The light source is not affected by the body in any significant way and does not prevent the body from emitting radiation. The source and the body will never reach equilibrium in the timescales we care about. The body absorbs radiation according to its albedo and the incoming flux. The body emits radiation according to temperature and emissivity. It will evolve to a local equilibrium temperature that depends on these parameters.
3. The earth/sun system—it is effectively equivalent to 2. The sun is not infinitely far away, but as far as the earth’s temp is concerned the sun is just a photon source—the earth has no effect on the sun’s temp and the objects are not in equilibrium.
We have hard data for situation 3 which shows that the balance between incoming radiation absorbed vs outgoing radiation emitted can differ for a complex composite object based on albedo/reflectivity vs emissivity. The end result is that the object’s local equilibrium temperature depends on these material parameters and can differ significantly from the black body temperature for the same input irradiance conditions.
4. Object in deep space. It receives incoming radiation from the CMB—which is just an infinite omnidirectional light source, like the source in 3. The directionality shouldn’t change anything, the energy spectrum shouldn’t change anything, so it’s equivalent to 3 and 2. The object’s resting temperature can be lower than the CMB blackbody ‘temperature’ (which after all isn’t the temp of an actual object—it’s tautologically just the temperature of a simple blackbody absorbing/emitting the CMB).
So what am I missing here? Seriously—still waiting to see how #4 could possibly differ from #3.
Whatever principle it is that allows the earth’s resting temp to vary based on surface albedo can be exploited to passively cool the earth, and thus can be exploited to passively cool other objects.
Google is now good enough that it gets some useful hits for “temperature lower than the CMB”. In particular on this thread from researchgate I found some useful info. Most of the discussion is preoccupied with negative temps, but one or two of the replies agree with my interpretation and they are unchallenged:
Rüdiger Mitdank · Humboldt-Universität zu Berlin:
The cosmic Background Radiation is in a very good approximation a black Body Radiation. Every Body which is in a thermal Equilibrium with this Radiation source has this temperature. If you have another Radiation sources, usually hotter bodies like suns, the temperature increases. If due to the surface reflectivity the Absorption is low, the Body temperature approximates to a lower value, that Emission and Absorption are equal. Therefore it might be possible, that bodies consisting of ice or snow and having a clean surface, have a temperature below cosmic Background temperature. This occurs only far away from any other Radiation source. I would look for comets out of our sun system.
The final temp for the earth in the (earth, sun, background) ‘equilibrium’ does depend on the surface albedo.
I recommended Frank’s work because it has the clearest unifying explanations of computational entropy/information. A deterministic computer is just an approximation—real systems are probabilistic (and quantum) and eventually we will move to those models of computation. The total entropy is always conserved, with some of the entropy budget being the usable computational bits (qubits) and some being unknown/error bits such as thermal noise (this generalization can also cover quantum noise). The ‘erasure’ of a bit really is just intentional randomization.
The idea of a hard error comes from the deterministic approximation, which assumes that the state of every bit is exactly known. In a prob circuit, we have instead a distribution over bit states, and circuit ops transform these distributions.
Uncertainty in the construction can be modeled as a learning/inference problem. Instead of simple deterministic circuits, think of learning probabilistic circuits (there are no ‘errors’ so to speak, just distributions and various types of uncertainty). As inference/learning reduces uncertainty over variables of interest, reversible learning must generate an equivalent amount of final excess noise/garbage bits. Noise bits in excess of the internal desired noise bit reserve would need to be expelled - this is the more sophisticated form of cooling.
Each device has an IO stream that is exactly bit conserved and thus reversible from its perspective. The same principle that applies to each local circuit element applies to each device.
The final limitation is incoming entropy—noise from the outside world. This inflow must be balanced by a matching bit outflow from the internal noise bit reserves. This minimal noise flow (temperature) places ultimate limits on the computational capability of the system in terms of SNR and thus (analog/probabilistic) bit ops.
For the moment I’m just going to ignore everything else in this debate (I have other time/energy commitments...) and just focus on this particular question, since it’s one of the most fundamental questions we disagree on.
You are wrong, plain and simple. Rüdiger Mitdank is also wrong, despite his qualifications (I have equivalent qualifications, for that matter). Either that or he has failed to clearly express what he means.
If it were true that you could maintain an object colder than the background without consuming energy (just by altering surface absorption!), then you could have a free energy device. Just construct a heat engine with one end touching the object and the other end being a large black radiator.
Yea—several examples from wikipedia for the temperature of a planet indicate that albedo and emissivity can differ (it’s implied on this page, directly stated on this next page).
Here under effective temperature they have a model for a planet’s surface temperature where the emissivity is 1 but the albedo can be greater than 0.
Notice that if you plug in an albedo of 1 into that equation, you get a surface temperature of 0K!
The generalized Stefan-Boltzmann law is thus the local equilibrium where irradiance/power absorbed equals irradiance/power emitted:
J_a = J_e
J_a = J_in*(1 - a)
J_e = eoT^4
T = (J_in (1-a) / (eo)) ^ (1/4)
J_in is the incoming irradiance from the light source, a is the material albedo, e is the material emissivity, o is SB const, T is temp.
This math comes directly from the wikipedia page; I’ve just converted from power units to irradiance, replacing the star’s irradiance term of L/(16 PI D^2) with a constant for an omni light source (CMB).
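A quick numeric sketch of that balance (my code; variable names assumed), sanity-checked against the textbook effective temperature of the earth:

```python
# Solve T = (J_in * (1 - a) / (e * sigma))^(1/4), the balance derived above.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def steady_temp(j_in, albedo, emissivity):
    """Steady-state temp where power absorbed equals power emitted."""
    return (j_in * (1.0 - albedo) / (emissivity * SIGMA)) ** 0.25

j_sun = 1361.0 / 4.0  # mean solar irradiance over the earth's surface, W/m^2
print(steady_temp(j_sun, albedo=0.3, emissivity=1.0))  # ~255 K, the textbook
                                                       # effective temperature
print(steady_temp(j_sun, albedo=0.9, emissivity=1.0))  # ~157 K, snowball case
```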
In retrospect, one way I could see this being wrong is if the albedo and emissivity are always required to be the same for a particular wavelength. In the earth example the albedo of relevance is for high energy photons from the sun, whereas the relevant emissivity is for lower energy infrared. Is that your explanation?
Hmm perhaps, but I don’t see how that’s a ‘free’ energy device.
The ‘background’ is a virtual/hypothetical object anyway—the CMB actually is just a flux of photons. The concept of temperature for photons and the CMB is contrived—defined tautologically based on an ideal black body emitter. The actual ‘background temperature’ for a complex greybody in the CMB depends on albedo vs emissivity—as shown by the math from wikipedia.
One can construct a heat engine to extract solar energy using a reflective high albedo (low temp reservoir) object and a low albedo black object. Clearly this energy is not free, it comes from the sun. There is no fundamental difference between photons from the sun and photons from the CMB, correct?
So in theory the same principle should apply, unless there is some QM limitation at low temps like 2.7K. Another way you could be correct is if the low CMB temp is somehow ‘special’ in a QM sense. I suggested that earlier but you didn’t bite. For example, if the CMB represents some minimal lower barrier for emittable photon energy, then the math model I quoted from wikipedia then breaks down at these low temps.
But barring some QM exception like that, the CMB is just like the sun—a source of photons.
I’ve never seen someone so confused about the basic physics.
Let’s untangle these concepts.
Effective temperature is not actual temperature. It’s merely the temperature of a blackbody with the same emitted radiation power. As such, it depends on two assumptions:
The emitted power is thermal in origin,
The emission spectrum is the ideal blackbody spectrum.
Of course if these assumptions aren’t true then the temperature estimate is going to be wrong. Going back to my LED example, a glowing LED might have an ‘effective temperature’ of thousands of degrees K. This doesn’t mean anything at all.
The source of your confusion could be that emitted and received radiation sometimes have different spectra. This is indeed true. It’s true of the Earth, for instance. But at equilibrium, absorption and emission are exactly equal at all wavelengths. Please read this: https://en.wikipedia.org/wiki/Kirchhoff%27s_law_of_thermal_radiation
Irrelevant, as I said. (The concept of albedo isn’t very useful for studying thermal equilibrium, I suggest you ignore it)
Yes, this is the definition of being at the same temperature, if you didn’t know. (Assuming, of course, that the radiation is thermal in origin and radiation is the only heat transfer process at work, which it is in our example). If you disagree with this then you are simply wrong by definition and there is nothing more to say.
You seem to think that temperature is some concept that exists outside of thermal equilibrium. This is a very common mistake. Temperature is only defined for a system at thermal equilibrium, and when two objects are in thermal equilibrium with one another, they are by definition at the same temperature. It does not matter at all how fast their atoms are moving or what they are made of.
No it’s not. It’s based on analysis of the spectrum, which is almost perfectly the spectrum of an ideal black body.
The physics would be exactly the same if it were an actual sheet of black material at 2.7 K covering the universe.
That was indeed a source of initial confusion as I stated above, and I read Kirchhoff’s Law. I said:
However this still doesn’t explain how passive temps lower than 2.7K are impossible. Passive albedo cooling works for the earth because snow/ice is highly reflective (an inefficient absorber/emitter) at the higher frequencies where most of the sun’s energy is concentrated, and yet it is still an efficient absorber/emitter at the lower infrared frequencies—correct?
Now—what prevents the same principle for operating at lower temps? If ice can reflect efficiently at 500 nm and emit efficiently at 10um, why can’t some hypothetical object reflect efficiently at ~cm range CMB microwave and emit efficiently at even lower frequencies?
You said: “But at equilibrium, absorption and emission are exactly equal at all wavelengths.”
But clearly, this isn’t the case for the sun earth system—and the law according to wikipedia is wavelength dependent. So I don’t really understand your sentence.
You are probably going to say … 2nd law of thermodynamics, but sorry even assuming that said empirical law is actually axiomatically fundamental, I don’t see how it automatically rules out these scenarios.
This isn’t an explanation, you still haven’t explained what is different in my examples.
Sure—but notice that it’s infinitely far away, so the concept of equilibrium goes out the window.
Also—aren’t black holes an exception? An object using a black hole as a heat sink could presumably achieve temps lower than 2.7K.
I never said it can’t. Such a material is definitely possible. It just couldn’t passively reach lower temperature than the background. Assuming it’s far out in space, as it cooled down to 2.7 K, it would eventually reach the limit where its absorption and emission at all frequencies equalled the background (due to Kirchhoff’s law), and that’s where the temperature would stay. If it started out at a lower temperature (due to being cooled beforehand) it would absorb thermal radiation until its temperature equalled the background (again, this is directly due to how we define temperature), and again that’s where it would stay.
If you have a problem with that, take it up with Kirchhoff, not me :)
The Earth system is completely different here because neither the Earth is in thermal equilibrium with the Sun, nor is the Earth in thermal equilibrium with the background, nor is the Sun in thermal equilibrium with the background. There is a net transfer of thermal energy occurring from the sun to the earth to the background (yes the sun is heating up the background—but not by much though). And ‘net transfer of energy’ means no equilibrium. Sources that use ‘thermal equilibrium’ for the relationship between the Sun and the Earth are using the term loosely and incorrectly.
The situation here is far from equilibrium because of the massive amounts of energy that the sun is putting out. This is the very opposite of ‘passive’ operation.
EDIT: fixed equation
The ‘background’ is just an incoming flux of photons. If you insist that this photon flux has a temperature, then it is obviously true that an object can have a lower temperature than this background flux, because an icy earth can have a lower temp than its background flux. As you yourself said earlier, temperature is only defined in terms of some equilibrium condition, and based on the equations that define temperature (below), the grey body ‘temperature’ for an irradiance distribution can differ from the black body temperature for the same distribution.
The math allows objects to shield against irradiance and achieve lower temps.
The general multispectral thermal emission of a grey body is just the black body emission function (planck’s law) scaled by the wavelength/frequency dependent emissivity function for the material (as this is how emissivity is defined).
G_{lambda,T}=epsilon_{lambda}B_{lambda,T}.
The outgoing thermal emission power for a grey body is thus:
\int \epsilon_{\lambda} B_{\lambda,T} \, d\lambda
The object’s temperature is stable when the net energy emitted equals the net energy absorbed (per unit time), where the net absorption is just the irradiance scaled by the emissivity function.
So we have:
\int \epsilon_{\lambda} B_{\lambda,T} \, d\lambda = \int \epsilon_{\lambda} E_{\lambda} \, d\lambda
where E_{\lambda} is the incoming irradiance distribution as a function of wavelength, and the rest should be self-explanatory.
I don’t have a math package handy to solve this for T. Nonetheless, given the two inputs: the material’s emissivity and the incoming irradiance (both functions of wavelength), a simple numerical integration and optimization can find T solutions.
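For what it’s worth, here is a minimal sketch of that numerical solve (my code, assuming numpy/scipy; the snow-like two-band emissivity is a made-up toy, and the irradiance is the diluted solar spectrum of the icy-earth case rather than a full-sky blackbody):

```python
# Find T such that integral(e_lam * B(lam, T)) = integral(e_lam * E_lam).
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import brentq

H, C, K_B = 6.62607015e-34, 2.99792458e8, 1.380649e-23
SIGMA = 5.670374419e-8

def planck(lam, t):
    """Blackbody spectral emissive power B(lam, T) in W / (m^2 * m)."""
    x = np.minimum(H * C / (lam * K_B * t), 700.0)  # cap to avoid exp overflow
    return (2 * np.pi * H * C**2 / lam**5) / np.expm1(x)

lam = np.logspace(-7, -3, 4000)  # 0.1 micrometer to 1 mm

# Incoming irradiance: a 5778 K solar spectrum diluted to ~340 W/m^2,
# the mean insolation at earth (so E_lam is not a blackbody filling the sky).
e_in = (340.0 / (SIGMA * 5778**4)) * planck(lam, 5778)

def emissivity(lam):
    # Hypothetical snow-like material: reflective in the visible/near-IR,
    # a good absorber/emitter in the thermal infrared.
    return np.where(lam < 3e-6, 0.1, 0.95)

absorbed = trapezoid(emissivity(lam) * e_in, lam)

def net_flux(t):  # emitted minus absorbed; zero at steady state
    return trapezoid(emissivity(lam) * planck(lam, t), lam) - absorbed

print(brentq(net_flux, 50.0, 400.0))  # ~165 K, vs ~278 K for a black body
```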
The only inputs to this math are the incoming irradiance distribution and the material’s emissivity/absorptivity distribution. There is no input labeled ‘background temperature’.
Assuming I got it right, this should model/predict the temps for earth with different ice/albedo/greenhouse situations (ignoring internal heating sources), and thus obviously should also allow shielding against the background! All it takes is a material function which falls off heavily at frequencies before the mean of the input irradiance distribution.
This math shows that a grey body in space can have an equilibrium temperature less than a black body.
A hypothetical ideal low temp whitebody would have an e function that is close to zero across the CMB frequency range, but is close to 1 for frequencies below the CMB frequency range. For current shielding materials the temp would at best be only a little lower than the CMB black body temp due to the 4th root, but still—for some hypothetical material the temp could theoretically approach zero as emissivity across the CMB range approaches zero. Put another way, the CMB doesn’t truly have a temperature of 2.7K—that is just the black body approx temp of the CMB.
If your position is correct, something must be wrong with this math—some extra correction is required—what is it?
No—that isn’t what the math says—according to the functions above which define temperature for this situation. As I pointed out earlier 2.7K is just the black body approx temp of the CMB (defined as the temp of a black body at equilibrium with the CMB!), and the grey body temp can be lower. Kirchhoff’s law just says that the material absorptivity at each wavelength equals the emissivity at that wavelength. The wikipedia page even has an example with white paint analogous to the icy earth example.
None of this word level logic actually shows up in the math. The CMB is just an arbitrary set of photons. Translating the actual math to your word logic, the CMB set can be split as desired into subsets based on frequency such that a material could shield against the higher frequencies and emit lower frequencies—as indicated by the math. There is thus a transfer between one subset of the CMB, the object, and another CMB subset.
Another possible solution in your twisted word logic: there is always some hypothetical surface at zero that is infinitely far away. Objects can radiate heat towards this surface—and since physics is purely local, the code/math for local photon interactions can’t possibly ‘know’ whether or not said surface actually exists. Or replace surface with vacuum.
For your position to be correct, there must be something extra in the math not yet considered—such as some QM limitation on low emission energies.
You’re abusing the math here. You’ve written down the expanded form of the Stefan-Boltzmann equation, which assumes a very specific relationship between temperature and emitted spectrum (which you say yourself). Then you write the temperature in terms of everything else in the equation, and assume a completely different emitted spectrum that invalidates the original equation that you derived T from in the first place.
What you’re doing isn’t math, it’s just meaningless symbolic manipulation, and it has no relationship to the actual physics.
Of course it does—this is a very very common and useful concept in physics—and when you say it doesn’t this betrays lack of familiarity with physics. Photon gas most certainly has temperature, in exactly the same way as a gas of anything else has temperature. Not only that, but it also has pressure and entropy.
In fact, in some situations, like an exploding hydrogen bomb, a photon gas has considerable temperature and pressure, far in excess of say the temperature in the center of the sun or the pressure in the center of the Earth.
https://en.wikipedia.org/wiki/Photon_gas
You say yourself that you assume ‘local equilibrium with it’s incoming irradiance’. Firstly, you’re using the word ‘local’ incorrectly. I assume you mean ‘equilibrium at each wavelength’. If so, this is the very definition of being at the same temperature as the incoming irradiance (assuming everything here is thermal in nature) and, again, there is nothing more to say. http://physics.info/temperature/
I’m sorry that you’re so insistent on your incorrect viewpoint that you’re not even willing to listen to the obvious facts, which are really very simple facts.
EDIT: I fixed the equations above, replaced with the correct emission function for a grey body.
Yes I see—the oT^4 term on the left needs to be replaced with the black body emission function of wavelength and temperature—which I gather is just Planck’s Law. Still, I don’t (yet) see how that could change the general conclusion.
Here is the corrected equation:
\int \epsilon_{\lambda} B_{\lambda,T} \, d\lambda = \int \epsilon_{\lambda} E_{\lambda} \, d\lambda
where E_{\lambda} is the incoming irradiance distribution as a function of wavelength, and the rest should be self-explanatory. For any incoming irradiance spectrum, the steady state temperature will depend on the grey body emissivity function and in general will differ from that of a black body.
I’m aware that the concept of temperature is applied to photons—but given that they do not interact, this is something very different from temperature for colliding particles. The temperature and pressure in examples such as the hydrogen bomb require interactions through intermediaries.
The definition of temperature for photon gas in the very page you linked involves a black body model due to the lack of photon-photon interactions—supporting my point about photon temps such as the CMB being defined in relation to the black body approx.
I meant local in the physical geometric sense—as in we are modelling only a local object and the space around it over a smallish timescale. The meaning of local equilibrium should thus be clear—the situation where net energy emitted equals net energy absorbed. This is the same setup as the examples from wikipedia.
Actually I’ve updated numerous times during this conversation—mostly from reading the relevant physics. I’ve also updated slightly on answers from physicists which reach my same conclusion.
I’ve provided the radiosity equations for a grey body in outer space where the temperature is driven only by the balance between thermal emission and incoming irradiance. There are no feedback effects between the emission and the irradiance, as the latter is fixed—and thus there is no thermodynamic equilibrium in the Kirchhoff sense. If you still believe that you are correct, you should be able to show how that math is wrong and what the correct math is.
This grey body radiosity function should be able to model the temp of say an icy earth, and can show how that temp changes as the object is moved away from the sun such that the irradiance shifts from the sun’s BB spectrum to that of the CMB.
We know for a fact that real grey body objects near earth can have local temps lower than the black body temp for the same irradiance. The burden of proof is now on you to show how just changing the irradiance spectrum can somehow lead to a situation where all possible grey body materials have the same steady state temperature.
You presumably believe such math exists and that it will show the temp has a floor near 2.7K for any possible material emissivity function, but I don’t see how that could possibly work.
No. In retrospect I should not have used the word ‘equilibrium’.
As you yourself said, the earth is not in thermodynamic equilibrium with the sun, and this is your explanation as to why shielding works for the earth.
Replace the sun with a distant but focused light source, such as a large ongoing explosion. The situation is the same. The earth is never in equilibrium with the explosion that generated the photons.
The CMB is just the remnant of a long gone explosion. The conditions of thermodynamic equilibrium do not apply.
If you are correct then you should be able to show the math. Provide an equation which predicts the temp of an object in space only as a function of the incoming (spectral) irradiance and that object’s (spectral) emissivity.