You bring up some interesting points. I do not know whether minds could be made fully reversible in practice (obviously it’s possible in principle, since physics is reversible). The question, however, is not whether negentropy use can be lowered but whether it can be lowered to the point that a different resource, one which does not follow the M^2 power law, is the limiting one. If negentropy use can be lowered, what is the new limiting factor?
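For reference, here is one place an M² scaling can come from, under my assumption (not established above) that the law in question is the Bekenstein–Hawking bound on the entropy of a region of mass M:

```latex
% Sketch, assuming the M^2 law refers to the Bekenstein--Hawking
% bound: the maximum entropy of a region of mass M (saturated by a
% black hole of Schwarzschild radius r_s) grows quadratically in M.
S_{\max} = \frac{k_B c^3 A}{4 G \hbar},
\qquad A = 4\pi r_s^2 = \frac{16\pi G^2 M^2}{c^4}
\quad\Rightarrow\quad
S_{\max} = \frac{4\pi G k_B}{\hbar c}\, M^2 .
```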
For example, you mentioned that many technologies require low temperatures. However, in the absence of perfect shielding against the CMB, this requires a cooling system, which is the same thing as an entropy absorber. The limiting resource in this case is still entropy.
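To make that concrete, here is a rough back-of-envelope sketch (an illustrative calculation with made-up operating numbers, assuming an ideal Carnot refrigerator rejecting heat to the 2.7 K CMB):

```python
# Sketch: minimum work to pump heat out of cold hardware and reject
# it to the CMB with an ideal (Carnot) refrigerator. The operating
# temperature and heat load are arbitrary illustrative choices.

T_CMB = 2.725   # K, cosmic microwave background temperature
T_COLD = 0.1    # K, assumed hardware operating temperature
Q_COLD = 1.0    # J, heat extracted from the cold side

cop = T_COLD / (T_CMB - T_COLD)      # Carnot coefficient of performance
work = Q_COLD / cop                  # minimum work input
s_dumped = (Q_COLD + work) / T_CMB   # entropy rejected to the CMB

print(f"work per joule extracted: {work:.1f} J")
print(f"entropy dumped to the CMB: {s_dumped:.2f} J/K")
```

Even an ideal cooler pays work and exports entropy; the CMB is still the sink doing the absorbing.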
You did not respond to my statement that the posterior probability of unknown physics that lets Moore’s law continue is low. Does this mean you agree? If not, where is the flaw in my argument?
If negentropy use can be lowered, what is the new limiting factor?
I imagine there will always be limiting factors, but they change with knowledge and technology.
I’m fairly sure that entropy can be recycled/managed well enough that heat/entropy issues will not be end limiters. In fact you could probably take reversible computing and entropy recycling to an extreme and make a computer that actually emitted negative net heat—absorbed entropy from the environment. I’m not sure that future hyperintelligences will necessarily have any need for the cold vacuum.
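For scale, the floor that reversible computing is trying to duck under is Landauer’s bound on irreversible bit erasure; a toy calculation (the temperatures are arbitrary illustrative choices):

```python
import math

# Landauer's principle: erasing one bit dissipates at least k*T*ln(2).
# Reversible computing avoids this cost by not erasing bits at all.
K_B = 1.380649e-23  # J/K, Boltzmann constant

for temp in (300.0, 77.0, 2.725):  # room temp, liquid N2, CMB
    e_bit = K_B * temp * math.log(2)
    print(f"T = {temp:7.3f} K -> {e_bit:.3e} J per erased bit")
```

The bound applies only to erasure, which is why logically reversible designs can in principle go below it.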
In fact, ‘entropy’ comes in many different forms. Cosmic rays are a particularly undesirable form of entropy, micrometeorites more so, and large asteroids and supernovas are just extrema on the same scale. There is always something. A planetary atmosphere and crust provide some nice free armor.
But anyway, I digress. I’m not even absolutely certain there will always be limiting factors, but I’d bet that way. In the long term, I’d bet that rare materials are a limiting factor; that energy cost is still a limiting factor, though mainly the energy cost of construction rather than of operation; and that isolation/cooling/protection is something of a limiting factor. But these may all be looking at the problem in the wrong light.
Bigger limiting factors for future hyper-intelligences may be completely non-material, such as proximity to existing knowledge/computational clusters and, ultimately, novelty (new information).
For example, you could compute a googolplex of operations per second and still be the dumbest hyperintelligence on the block if you were stuck with only human sensory capacities and a slow, high-latency connection to other hyperintelligences and knowledge sources.
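Putting rough numbers on the latency point (the distances are illustrative):

```python
# Round-trip light-speed latency over some illustrative distances.
# No amount of local compute shortens these; only proximity does.
C = 299_792_458.0  # m/s, speed of light in vacuum

DISTANCES_M = {
    "across a city (10 km)":   1.0e4,
    "Earth to Moon":           3.84e8,
    "Earth to Mars (closest)": 5.46e10,
    "nearest star (4.2 ly)":   4.2 * 9.4607e15,
}

for name, dist in DISTANCES_M.items():
    rtt = 2 * dist / C
    print(f"{name:24s} round trip: {rtt:.3g} s")
```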
You did not respond to my statement that the posterior probability of unknown physics that lets Moore’s law continue is low. Does this mean you agree? If not, where is the flaw in my argument?
I’ve thought a little more about how to assign a likelihood to known physics (Bayesian evidence and a universal prior), and it led me to the inescapable conclusion that we are still a ways away from final physics. In fact, in the process I’ve been reading up more on QM, and it led me to realize that whole tracts of it are... on the wrong track.
The universal prior as applied to physics is a whole topic in and of itself, but it is the best guiding principle as to what ultimate final physics will allow. Creation of baby universes is dependent on GR, and is a prediction of loop quantum gravity in particular; I haven’t gotten to those maths yet. A more basic first question might be something like: which is more a priori likely, analog or digital, and by how much? I’m betting digital, but if analog is not ruled out by the UP it could allow for unlimited local computation in principle, as one example.
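To illustrate the kind of comparison I mean, here is a toy sketch (emphatically not Solomonoff induction proper: Kolmogorov complexity is uncomputable, so a made-up description length in bits stands in for it, and both lengths below are invented for illustration):

```python
# Toy universal-prior comparison: each hypothesis gets prior mass
# proportional to 2**(-L), where L is the bit-length of its shortest
# description. The hypotheses and lengths are purely illustrative.
HYPOTHESES = {
    "digital (discrete) physics":  500,  # assumed description length, bits
    "analog (continuous) physics": 520,  # assumed description length, bits
}

weights = {name: 2.0 ** -bits for name, bits in HYPOTHESES.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(f"{name}: relative prior {w / total:.6f}")
```

A 20-bit difference in description length already swings the odds by a factor of about a million, which is why the choice of encoding matters so much.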
I’m fairly sure that entropy can be recycled/managed well enough that heat/entropy issues will not be end limiters. In fact you could probably take reversible computing and entropy recycling to an extreme and make a computer that actually emitted negative net heat—absorbed entropy from the environment.
That violates the second law of thermodynamics unless you discover an infinite heat sink, which requires a specific type of new physics.
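Spelled out, the bookkeeping is just the textbook second law:

```latex
% Entropy accounting for a computer C exchanging heat with its
% environment E; the second law requires
\Delta S_{\mathrm{total}} = \Delta S_C + \Delta S_E \;\ge\; 0 .
% A machine that persistently absorbs entropy from its surroundings
% has \Delta S_E < 0 each cycle, forcing \Delta S_C \ge -\Delta S_E > 0:
% its internal entropy must grow without bound unless it can export
% the excess to some unbounded sink.
```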
But anyway, I digress. I’m not even absolutely certain there will always be limiting factors, but I’d bet that way. In the long term, I’d bet that rare materials are a limiting factor; that energy cost is still a limiting factor, though mainly the energy cost of construction rather than of operation; and that isolation/cooling/protection is something of a limiting factor. But these may all be looking at the problem in the wrong light.
Bigger limiting factors for future hyper-intelligences may be completely non-material, such as proximity to existing knowledge/computational clusters and, ultimately, novelty (new information).
This all depends on what is being limited by these factors, which is your values. If you value sentient life, you need computing power. If you value novelty and learning, you also need computing power, but there might be diminishing returns (of course, it is not inconsistent to value sentience with diminishing returns, though most humans who do are inconstant).
I’ve thought a little more about how to assign a likelihood to known physics (Bayesian evidence and a universal prior), and it led me to the inescapable conclusion that we are still a ways away from final physics. In fact, in the process I’ve been reading up more on QM, and it led me to realize that whole tracts of it are... on the wrong track.
I’m skeptical of this. Can you show your work? I’m particularly doubtful of your opinions on QM, unless they’re based on some interesting point about induction, in which case I’m only as doubtful of that as I am of the rest of this paragraph.
Creation of baby universes is . . . a prediction of loop quantum gravity in particular; I haven’t gotten to those maths yet.
No, the only thing baby universes and LQG have in common is that Lee Smolin studies them. He hypothesized baby universes not based on LQG, but because they allow a form of natural selection that has a chance of predicting life-filled universes without having to think about anthropic considerations. This seems like a horribly confused reason. The theory has no evidence in its favour, so its probability is not higher than its prior. In fact, according to Smolin’s Wikipedia page, it has been falsified by the discovery that the mass of the strange quark is not tuned for optimal black hole production.
A more basic first question might be something like: which is more a priori likely, analog or digital, and by how much? I’m betting digital, but if analog is not ruled out by the UP it could allow for unlimited local computation in principle, as one example.
If a prior prohibits an analog universe, then it is a suboptimal prior.