Although I think the assumption that economic growth demands endlessly increasing material consumption is flawed, it seems natural to imagine that even a maximally efficient economy must use a nonzero number of atoms, on average, to produce an additional utilon. There must, therefore, be a maximal level of universal utility, which we can approach to within any given distance in a finite number of doublings. Since we have enormous amounts of time available, and face shrinking access to material resources over time, it seems natural to posit that an extremely long-lived species could reach a point at which the economy simply cannot keep growing at the same rate.
The timeline you establish here by extrapolating present trends isn’t convincing to me, but I think the basic message that “this can’t go on” is correct. This insight seems vastly more important for understanding the context of our century than any particular estimate of when we might reach the theoretical limit of utility.
In the limit you are correct: if a utility function assigns a value to every possible arrangement of atoms, then there is some maximum value, and you can’t keep increasing value forever without adding atoms because you will hit the maximum at some point. An economy can be said to be “maximally efficient” when value can’t be added by rearranging its existing atoms, and we must add atoms to produce more value.
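A toy sketch of this “finitely many arrangements implies a maximum exists” point, with a purely hypothetical utility function (the scoring rule below is an arbitrary illustration, not anyone’s actual values):

```python
from itertools import permutations

# Toy model: 4 distinguishable "atoms". A utility function assigns a
# value to every arrangement; since there are finitely many
# arrangements, a maximum-value arrangement necessarily exists.
def utility(arrangement: tuple) -> float:
    # Hypothetical scoring rule: one point per ascending adjacent pair.
    return sum(1.0 for a, b in zip(arrangement, arrangement[1:]) if a < b)

arrangements = list(permutations(range(4)))  # 4! = 24 arrangements
best = max(arrangements, key=utility)
print(len(arrangements), best, utility(best))
```

With a fixed set of atoms the search space is finite, so “maximally efficient” is well-defined; the open question in the surrounding discussion is only how astronomically large that finite space is.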
However, physics provides only very weak upper bounds on the possible value (to humans) of a physical system of a given size, because the number of possible physical arrangements of a finite-sized system is enormous. The Bekenstein bound is approximately 2.6e43 * M * R bits, with the mass M in kg and the radius R in meters. Someone who understands QM should correct me here, but as an order-of-magnitude-of-order-of-magnitude estimate: our galaxy has a mass of roughly 2e42 kg (about 1e12 solar masses) and a radius of roughly 5e20 m, so a region of that mass and size can contain up to about 2.6e106 bits of information.
Those are bits; the number of distinguishable states is 2^(2.6e106). That is much, much bigger than the OP’s 3e70; we can grow the per-atom value of the overall system state by a factor much bigger than 3e70.
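The arithmetic behind this estimate can be checked directly. A minimal sketch, assuming rough values for the Milky Way’s mass (~2e42 kg) and radius (~5e20 m):

```python
import math

# Bekenstein bound: I <= 2*pi*c*M*R / (hbar * ln 2) bits,
# for a system of mass M (kg) within radius R (m).
c = 2.998e8       # speed of light, m/s
hbar = 1.055e-34  # reduced Planck constant, J*s

def bekenstein_bits(mass_kg: float, radius_m: float) -> float:
    return 2 * math.pi * c * mass_kg * radius_m / (hbar * math.log(2))

coeff = bekenstein_bits(1.0, 1.0)   # bits per kg*m, roughly 2.6e43
bits = bekenstein_bits(2e42, 5e20)  # assumed galactic mass and radius

# The number of distinguishable states is 2**bits, far too large to
# represent directly, so compare via log10 instead.
log10_states = bits * math.log10(2)
print(f"{coeff:.2e} bits per kg*m")
print(f"{bits:.2e} bits; log10(number of states) ~ {log10_states:.2e}")
```

Even with the mass and radius each off by a few orders of magnitude, the exponent dwarfs 70, which is all the argument needs.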
Of course this isn’t a tight argument and there are lots of other things to consider. For example, to get the galaxy into some valuable configuration, we’d need to “use up” part of the same galaxy in the process of changing the configuration of the rest. But from a purely physical perspective, the upper bound on value per atom is enormously high.
ETA: replaced mind-boggling numbers with even bigger mind-boggling numbers after a more careful reading of Wikipedia.
That’s a nice conceptual refinement. It actually swings me in the other direction: it now seems plausible that humans might not have nearly enough time to find the optimal arrangement within their expected lifespan, and that this might be a central question.
One possibility is that there is a maximal-value “tile” that is much smaller than “all available atoms” and can be duplicated indefinitely to maximize expected value. So perhaps we don’t need to explore all combinations of atoms to be sure that we’ve reached the limit of value.
Or even in the expected lifetime of the universe.
That’s a good point, but how would we know? We would need to prove that a given configuration has maximal (and tileable) utility without evaluating the exponentially larger number of larger configurations. And we don’t (and possibly can’t, or shouldn’t) have an exact mathematical definition of a Pan-Human Utility Function.
However, a proof isn’t needed to make this happen (for better and for worse). If a local configuration is created which is sufficiently more (universally!) valuable than any other known local configuration, neighbors will start copying it and it will tile the galaxy, possibly ending progress if it’s a stable configuration—even if this configuration is far from the best one possible locally (let alone globally).
In practice, “a wonderful thing was invented, everyone copied it of their own free will, and stayed like that forever because human minds couldn’t conceive of a better world, leaving almost all possible future value on the table” doesn’t worry me nearly as much as other end-of-progress scenarios. The ones where everyone dies seem much more likely.
Indeed. I think that a serious search for an answer to these questions is probably best left for the “Long Reflection.”