The idea that there is no overall change in complexity over time is just another model, and it clearly fails all postdictions and makes nonsensical short-term predictions.
I never said that there is no change in complexity over time; I just said that some trends in technological growth, such as Moore’s law, will stop too soon for your predictions to work.
You are saying that the singularity is a breakdown of our models rather than a literally infinite rate of growth, but earlier you said
Why should exponential acceleration ever peter out? It’s the overall mega-pattern over all of history to date.
and
If you plot it in terms of economic growth, computational growth or just complexity growth, the overall trend of the cosmic calendar is geometric—it ends with an infinity/singularity. I take this as general evidence against acceleration ever ending.
Those were the things that seemed death-spirally to me, but they also seem to contradict what you are saying now. What am I misunderstanding?
You are saying that the singularity is a breakdown of our models rather than a literally infinite rate of growth, but earlier you said
The general change in complexity over time follows a surprisingly predictable pattern or trend. The resulting model predicts that local complexity will continue to accelerate in some narrow branches or sub-pockets of the universe towards a vertical asymptote, where it approaches infinity—a Singularity. We can understand this computationally as the end result of a long chain of recursive self-optimization driving computational systems down to smaller and faster scales until you eventually hit the Planck scale barrier. The ultimate physical computer necessarily resembles a small piece of the big bang—a physical Singularity/black hole like entity. Computation/intelligence/complexity approaches infinity within this localized pocket, and at that moment in that region the model breaks down and “something strange happens”. Perhaps this involves the creation of new universes. If that is possible, that would allow complexity to continue to increase without bound in the newly generated bubble universes. So the term Singularity in this model has a very specific physical meaning—as in an actual space-time Singularity resembling a black hole or the Big Bang. That is why I call it “physical singularity”—I don’t mean some vague analogy like “greater than human intelligence”. The physics of singularities is not yet fully determined, so exactly what future hyper-intelligences could do at that level is open/unknown.
The ultimate physical computer necessarily resembles a small piece of the big bang—a physical Singularity/black hole like entity.
Because they are both very dense? That’s hardly a resemblance. You keep making analogies like this, but I do not see what purpose they serve.
Computation/intelligence/complexity approaches infinity within this localized pocket, and at that moment in that region the model breaks down and “something strange happens”. Perhaps this involves the creation of new universes. If that is possible, that would allow complexity to continue to increase without bound in the newly generated bubble universes.
If the model breaks down, then it provides almost no evidence as to whether new universes can be created. This behaviour seems to fit the model better, but, since we already know that the model breaks down, we cannot use it to justify any such predictions.
So the term Singularity in this model has a very specific physical meaning—as in an actual space-time Singularity resembling a black hole or the Big Bang.
We don’t even know if there are singularities at the centre of black holes or at the big bang. Even if there were, there would be no reason to expect a similar singularity would be a necessary part of advanced technology. I do not see how you deduced this and it seems to only be a part of your argument because this phenomenon is described by the same word as a technological singularity.
The ultimate physical computer necessarily resembles a small piece of the big bang—a physical Singularity/black hole like entity.
Because they are both very dense? That’s hardly a resemblance.
I’m not sure what you mean by “that’s hardly a resemblance”. If the ultimate physical computer is dense enough to be a gravitational singularity, that is a black hole singularity by definition, not just resemblance. Look up Seth Lloyd’s paper “Ultimate physical limits to computation” for the physics reference on why ultimate computers necessarily involve physical black holes/singularities.
If the model breaks down, then it provides almost no evidence as to whether new universes can be created.
No, for more indirect speculative evidence we will have to wait for physics to advance, which may take a while (at least until AGI comes up to speed). However, this particular type of speculation the model suggests is linked to ideas in physics—see chaotic inflation/bubble universe, selfish biocosm/fecund universe theory, and John Smart’s developmental singularity idea for the overview.
We don’t even know if there are singularities at the centre of black holes or at the big bang.
Singularity here just means model-breakdown and ‘things going to infinity’. If new models remove the infinity then perhaps the ‘Singularity’ goes away, but you still have something approaching infinity. Regardless, in the meantime the word “Singularity” is employed.
Even if there were, there would be no reason to expect a similar singularity would be a necessary part of advanced technology.
There are specific, detailed, physical reasons why singularities are natural endpoints to ultimate computational technology-in-theory (namely, they are maximal entropy states, and computation is ultimately entropy-limited; see the earlier-mentioned work). Of course, that doesn’t mean that the ultimate practical computational systems will be black holes, but still.
I do not see how you deduced this and it seems to only be a part of your argument because this phenomenon is described by the same word as a technological singularity.
Whoever originally coined the term (Vernor Vinge?) picked Singularity specifically because of the association with model-breakdown in math/physics, but was probably not aware of the full connection to ultimate computational physics, as those results weren’t developed or understood until considerably later.
I am familiar with the work of Seth Lloyd (and that of Wei Dai) on the usefulness of black holes in computing. The singularity in black holes is a different issue than this usefulness.
I read something here recently with a good analogy for this. If someone thinks a whale is a fish, then fishiness is a quality that they would ascribe to whales, but it is not part of their definition of a whale, so they would stop saying that whales were fish if presented with conflicting evidence. Similarly, we have two issues here, technological singularities and mathematical singularities. It turns out that the latter might be useful for the purposes of the former, but it is not part of the definition of the former. I do not know what purpose you are bringing this up for. I feel like we are discussing the behaviour of whales and you keep mentioning that they are mammals. It is true according to our latest science, but it seems irrelevant. In particular, you did not link the claims you made earlier about technology accelerating forever to this discussion of black holes.
You originally said
If you plot it in terms of economic growth, computational growth or just complexity growth, the overall trend of the cosmic calendar is geometric—it ends with an infinity/singularity. I take this as general evidence against acceleration ever ending.
You have since said that the singularity is not literally an infinity, but a breakdown of our models. When I pressed you on this contradiction, you did not really respond, but brought in other issues about black holes and bubble universes, including many extremely speculative proposals. What is your position on this?
I am familiar with the work of Seth Lloyd (and that of Wei Dai) on the usefulness of black holes in computing.
The former work concerns the particular use connected to our discussion (black hole computers). The second work (Wei Dai’s) is about black holes’ potential use as radiators/entropy dumps.
The singularity in black holes is a different issue than this usefulness.
No, it is the same. The speed and efficiency limitations of computation stem from the speed-of-light communication barrier, and thus they scale with density (inversely with size). Moore’s law is an exponential increase in information density. If you continue to increase density (packing more information into less space), it eventually leads you to the Bekenstein Bound and a black hole, a gravitational singularity.
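To make the density argument concrete, here is a rough back-of-envelope sketch (the 1 kg mass and 5 cm radius are just assumed illustrative figures, not a real device): at fixed mass-energy, the Bekenstein bound on storable bits shrinks as you shrink the radius, and the configuration that saturates the bound at the smallest possible radius is by construction a black hole, since at R = 2GM/c^2 the bound coincides with the Bekenstein-Hawking entropy.

```python
# Back-of-envelope sketch of the density argument above. Illustrative only;
# the 1 kg mass and 5 cm radius are assumed example figures, not a real device.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
LN2  = math.log(2)

def bekenstein_bits(mass_kg, radius_m):
    """Bekenstein bound: max bits storable in a sphere of this radius and mass-energy."""
    energy = mass_kg * c**2
    return 2 * math.pi * radius_m * energy / (hbar * c * LN2)

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

def black_hole_bits(mass_kg):
    """Bekenstein-Hawking entropy of a black hole of this mass, in bits."""
    return 4 * math.pi * G * mass_kg**2 / (hbar * c * LN2)

m = 1.0                          # kg, assumed computer mass
print(bekenstein_bits(m, 0.05))  # ~1e42 bits allowed at a 5 cm radius
rs = schwarzschild_radius(m)     # ~1.5e-27 m
print(bekenstein_bits(m, rs))    # ~4e16 bits: the bound at the Schwarzschild radius...
print(black_hole_bits(m))        # ...equals the black hole's own entropy
```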
If you plot it in terms of economic growth, computational growth or just complexity growth, the overall trend of the cosmic calendar is geometric—it ends with an infinity/singularity. I take this as general evidence against acceleration ever ending.
You have since said that the singularity is not literally an infinity, but a breakdown of our models. When I pressed you on this contradiction, you did not really respond, but brought in other issues about black holes and bubble universes, including many extremely speculative proposals. What is your position on this?
I believe I outlined it in previous replies—if Moore’s Law type exponential information processing density continues to increase past the barrier of molecular computing this eventually leads to (requires) space-time engineering at the level of artificial gravitational singularities (black holes). All of future physics is speculative, but many branches of current speculation in physics for ultimate technologies involve manipulating gravitational singularities. Some possibilities include the creation of new bubble universes which would allow the overall pattern to replicate and continue inside the new universes—a form of multiversal replication—the developmental singularity idea.
Yes, this is all speculation, as is any theory of physical eschatology (such as the theory that we will eventually colonize the galaxy). The original start of all of this was the observation that colonizing the galaxy would amount to an extremely slow rate of growth compared to the historical trend. Growth at the historical pace will require (or predicts) something more radical such as space-time engineering/universal replication.
The former work concerns the particular use connected to our discussion (black hole computers). The second work (Wei Dai’s) is about black holes’ potential use as radiators/entropy dumps.
The ability to dump entropy is essential to computing, so Wei Dai’s work is relevant. Limits on entropy dumping provide limits on computation.
All of future physics is speculative, but many branches of current speculation in physics for ultimate technologies involve manipulating gravitational singularities. Some possibilities include the creation of new bubble universes which would allow the overall pattern to replicate and continue inside the new universes—a form of multiversal replication—the developmental singularity idea.
It is possible that we will gain technology that allows us to vastly increase our computing power beyond what is currently known to be possible in principle, but these speculations are only a subset of possible futures. For that, the universe has to be a certain way, and there is no reason to prefer these hypotheses to any of the others.
The prior probability of unknown physics that lets Moore’s law continue is therefore low.
Yes, this is all speculation, as is any theory of physical eschatology (such as the theory that we will eventually colonize the galaxy). The original start of all of this was the observation that colonizing the galaxy would amount to an extremely slow rate of growth compared to the historical trend. Growth at the historical pace will require (or predicts) something more radical such as space-time engineering/universal replication.
If we observe a trend, but we can explain the trend and the explanation points to a specific time when the trend breaks down, then a hypothesis that invokes some effect to make the trend continue does no better a job of explaining our observations than a hypothesis that predicts the trend will stop.
The likelihood ratio is therefore about 1:1. This trend gives little evidence. The posterior probability of unknown physics that lets Moore’s law continue is therefore low.
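To spell out the arithmetic (a toy sketch; the 1% prior is an arbitrary illustrative number, not an estimate):

```python
# Toy Bayes update for the argument above: if both hypotheses explain the observed
# trend equally well, the likelihood ratio is 1 and the posterior equals the prior.
def update(prior, likelihood_ratio):
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

prior = 0.01   # assumed low prior for "unknown physics rescues Moore's law"
print(update(prior, likelihood_ratio=1.0))   # ~0.01: the posterior stays low
```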
The ability to dump entropy is essential to computing, so Wei Dai’s work is relevant. Limits on entropy dumping provide limits on computation.
Actually this is not generally true. The ability to dump entropy ... is simply the ability to dump entropy. In the current dominant framework of irreversible, deterministically programmable von Neumann architectures, entropy is dumped left and right. Moore’s law for traditional computing will run into the Landauer limit relatively soon—this decade or next at the latest, and it will come to a hard end.
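Rough numbers for how close that hard end is (a sketch; the present-day energy per switched bit and the two-year halving time are assumed ballpark figures, not measurements):

```python
# Sketch of the distance to the Landauer limit for irreversible bit erasure.
# The ~1e-17 J per switched bit and the ~2-year energy-halving time are assumed
# ballpark figures used only for illustration.
import math

k_B = 1.380649e-23                      # Boltzmann constant, J/K
T   = 300.0                             # room temperature, K
landauer = k_B * T * math.log(2)        # ~2.9e-21 J minimum per irreversible bit erasure

per_bit_today = 1e-17                   # J per switched bit, assumed ballpark
headroom = per_bit_today / landauer     # ~3.5e3
halvings = math.log2(headroom)          # ~12 halvings of energy per bit remaining
print(landauer, headroom, halvings * 2) # very roughly two decades at one halving per ~2 years
```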
However, many algorithms can actually use entropy. Any type of algorithm that can use about as much entropy as it produces can trivially be made fully reversible and approach asymptotically zero net energy dissipation. Monte Carlo simulation is a prototypical example, and entropy has similar uses in pattern prediction from compressed knowledge in the domain of AI algorithms.
Furthermore, advanced physics simulations of the type that future upload civilizations would desire can be made trivially reversible because physics itself is reversible. Any state updates and differential equations used in physics simulation are thus reversible and need not even produce any waste entropy. This, combined with the potential positive uses of any actual entropy, could allow computation in general to continue to advance. The limit only applies to specific classes of computation, and fortunately the most important future domains of massive computation (general simulation and related general intelligence) are fully reversible at zero penalty.
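Here is a minimal illustration of what “trivially reversible” means in practice (a toy sketch, not a real physics engine; the integer force law is made up for the example):

```python
# Minimal sketch of an exactly reversible simulation step: an integer "drift-kick"
# update for a toy 1D particle. Each step is a bijection on the integer state, so
# running it backward recovers the initial state bit-for-bit; no information is
# discarded, which is the condition for avoiding the Landauer cost in principle.
def force(x):
    return -(x >> 4)              # made-up toy force law, deterministic on integers

def step_forward(x, v):
    x = x + v                     # drift
    v = v + force(x)              # kick
    return x, v

def step_backward(x, v):
    v = v - force(x)              # un-kick
    x = x - v                     # un-drift
    return x, v

start = (1000, 3)
state = start
for _ in range(10_000):
    state = step_forward(*state)
for _ in range(10_000):
    state = step_backward(*state)
print(state == start)             # True: the dynamics are exactly invertible
```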
Yes, approaching those limits will require very low temperatures, and there will always be some random entropy coming in from the outside on the surface of the computer, but this surface can simply be used as an entropy-source circuit.
And finally, moving from deterministic to nondeterministic statistical computation in general further reduces potential problems with entropy.
Of course there are other limits: there is a fundamental final limit, based on QM quantization and the uncertainty principle, on the minimum energy required to represent a bit and compute a “bit op”.
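For scale, the number that limit works out to (a sketch, using 1 kg of rest mass-energy as the assumed example, as in Lloyd's analysis):

```python
# Sketch of the quantum "bit op" limit mentioned above, via the Margolus-Levitin
# bound: a system with average energy E can pass through at most ~2E/(pi*hbar)
# orthogonal states per second. The 1 kg of rest mass-energy is an assumed example.
import math

hbar = 1.054571817e-34            # J*s
c    = 2.99792458e8               # m/s

def max_ops_per_second(energy_joules):
    return 2.0 * energy_joules / (math.pi * hbar)

E = 1.0 * c**2                    # rest energy of 1 kg, ~9e16 J
print(max_ops_per_second(E))      # ~5.4e50 bit ops per second
```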
That limit is very far away, but the miniaturization limits of building any structures out of atoms place a closer soft limit in terms of the energy density that can be contained in a molecular structure. This may limit regular computing out of safe everyday materials to chemical bond energy densities, but we exceed those densities in nuclear reactors, and eventually we could achieve those energy densities in computation. And again, if the computation is reversible and all entropy is recycled, it need not generate any heat (although catastrophic failure of such a system could amount to a nuclear-level accident, which severely constrains the practical economics).
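Roughly how much headroom that is (both numbers below are textbook-level ballparks used purely for illustration, not precise material properties):

```python
# Rough comparison for the energy-density point above: chemical fuels at a few eV
# per bond versus ~200 MeV released per fissioned U-235 nucleus.
chemical_J_per_kg = 1e7          # ~ order of good chemical fuels / bond energies
fission_J_per_kg  = 8e13         # ~ complete fission of U-235
print(fission_J_per_kg / chemical_J_per_kg)   # ~8e6: millionsfold headroom beyond everyday materials
```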
Looking farther ahead we can see that the uncertainty principle does not say that 1 quantum of energy can only use or compute 1 bit. In fact the limits are unimaginably more generous. An interaction (such as a collision) of 2 particles with N bit-states can have on the order of 2^N possible output states, so the final ladder is to turn each individual particle into a complex functional mapping or small computer unto itself. If climbing that ladder is ever practically possible (and it appears to be), it may not technically lead to infinity but it’s close enough. This is all with known physics.
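A trivial bit of arithmetic for that last point (the values of N are arbitrary illustrative choices, not claims about any particular particle pair):

```python
# Tiny counting sketch for the point above.
import math

N = 40
print(2 ** N)                    # ~1.1e12: joint output states one collision can select among
# Even a modest N gives a huge space of distinct reversible mappings (permutations)
# the interaction could in principle implement, i.e. "a small computer unto itself":
print(math.factorial(2 ** 4))    # for N = 4: 16! ~ 2.1e13 possible reversible mappings
```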
You bring up some interesting points. I do not know whether minds could be made fully reversible in practice (obviously it’s possible in principle, since physics is reversible). The question, however, is not whether negentropy use can be lowered but whether it can be lowered to the point that a different resource, one which does not follow the M^2 power law, is the limiting one. If negentropy use can be lowered, what is the new limiting factor?
For example, you mentioned that many technologies require low temperatures. However, in the absence of perfect shielding against the CMB, this requires a cooling system, which is the same thing as an entropy absorber. The limiting resource in this case is still entropy.
You did not respond to my statement that the posterior probability of unknown physics that lets Moore’s law continue is low. Does this mean you agree? If not, where is the flaw in my argument?
If negentropy use can be lowered, what is the new limiting factor?
I imagine there will always be limiting factors, but they change with knowledge and technology.
I’m fairly sure that entropy can be recycled/managed well enough that heat/entropy issues will not be end limiters. In fact you could probably take reversible computing and entropy recycling to an extreme and make a computer that actually emitted negative net heat—absorbed entropy from the environment. I’m not sure that future hyperintelligences will necessarily have any need for the cold vacuum.
In fact, ‘entropy’ comes in many different forms. Cosmic rays are particularly undesirable forms of entropy, micrometeorites more so, and then large asteroids and supernovas are just extrema on this same scale. There is always something. A planetary atmosphere and crust provides some nice free armor.
But anyway, I digress. I’m not even absolutely certain there will always be limiting factors, but I’d bet that way. I’d bet that in the long term rare materials are a limiting factor; energy cost is still a limiting factor, but mainly just in terms of the energy costs of construction rather than operation; and isolation/cooling/protection is something of a limiting factor. But these may be looking at the problem in the wrong light.
Bigger limiting factors for future hyper-intelligences may be completely non-material—such as proximity to existing knowledge/computational clusters, and ultimately—novelty (new information).
For example, you could compute a googolplex of operations per second and still be the dumbest hyperintelligence on the block if you are stuck with only human sensory capacities and a slow, high-latency connection to other hyperintelligences and knowledge sources.
You did not respond to my statement that the posterior probability of unknown physics that lets Moore’s law continue is low. Does this mean you agree? If not, where is the flaw in my argument?
I’ve thought a little more on how to assign a likelihood to known physics (Bayesian evidence and a universal prior) and it led me to the inescapable conclusion that we are still a ways away from final physics. In fact, in the process I’ve been reading up more on QM and it led me to realize that whole tracts of it are ... on the wrong track.
The universal prior as applied to physics is a whole topic in and of itself, but it is the best guiding principle as to what ultimate final physics will allow. Creation of baby universes is dependent on GR and a prediction of loop quantum gravity in particular; I haven’t gotten to those maths yet. A more basic first question might be something like—which is more a priori likely—analog or digital, and by how much? I’m betting digital, but if analog is not ruled out by the UP it could allow for unlimited local computation in principle, as one example.
I’m fairly sure that entropy can be recycled/managed well enough that heat/entropy issues will not be end limiters. In fact you could probably take reversible computing and entropy recycling to an extreme and make a computer that actually emitted negative net heat—absorbed entropy from the environment.
That violates the second law of thermodynamics unless you discover an infinite heat sink, which requires a specific type of new physics.
But anyway, I digress. I’m not even absolutely certain there will always be limiting factors, but I’d bet that way. I’d bet that in the long term rare materials are a limiting factor; energy cost is still a limiting factor, but mainly just in terms of the energy costs of construction rather than operation; and isolation/cooling/protection is something of a limiting factor. But these may be looking at the problem in the wrong light.
Bigger limiting factors for future hyper-intelligences may be completely non-material—such as proximity to existing knowledge/computational clusters, and ultimately—novelty (new information).
This all depends on what is being limited by these factors, which comes down to your values. If you value sentient life, you need computing power. If you value novelty and learning, you also need computing power, but there might be diminishing returns (of course, it is not inconsistent to value sentience with diminishing returns, though most humans who do are inconstant).
I’ve thought a little more on how to assign a likelihood to known physics (Bayesian evidence and a universal prior) and it led me to the inescapable conclusion that we are still a ways away from final physics. In fact, in the process I’ve been reading up more on QM and it led me to realize that whole tracts of it are ... on the wrong track.
I’m skeptical of this. Can you show your work? I’m particularly doubtful of your opinions on QM, unless they’re based on some interesting point about induction, in which case I’m only as doubtful of that as I am of the rest of this paragraph.
Creation of baby universes is ... a prediction of loop quantum gravity in particular; I haven’t gotten to those maths yet.
No, the only thing baby universes and LQG have in common is that Lee Smolin studies them. He hypothesized baby universes not based on LQG, but because they allow a form of natural selection that has a chance of predicting life-filled universes without having to think about anthropic considerations. This seems like a horribly confused reason. The theory has no evidence in its favour, so its probability is not higher than its prior. In fact, according to Smolin’s Wikipedia page, it has been falsified by the discovery that the mass of the strange quark is not tuned for optimal black hole production.
A more basic first question might be something like—which is more a priori likely—analog or digital, and by how much? I’m betting digital, but if analog is not ruled out by the UP it could allow for unlimited local computation in principle, as one example.
If a prior prohibits an analog universe, then it is a suboptimal prior.