Perhaps we don’t need to explore all combinations of atoms to be sure that we’ve achieved the limit of value.
That’s a good point, but how would we know? We would need to prove that a given configuration is of maximal (and tile-able) utility without evaluating the exponentially larger space of bigger configurations. And we don’t (and possibly can’t, or shouldn’t) have an exact (mathematical) definition of a Pan-Human Utility Function.
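To get a feel for why evaluation is hopeless, here is a minimal back-of-the-envelope sketch. Every constant in it (states per atom, region size, an absurdly generous evaluation rate, the universe’s lifetime in seconds) is an arbitrary illustrative assumption, not a physical claim:

```python
import math

# Back-of-the-envelope sketch; all constants are illustrative assumptions.
k_states_per_site = 10       # assumed distinguishable states per atom
region_sites = 1_000         # a "local configuration" of only 1,000 atoms
evals_per_second = 1e50      # a wildly generous evaluation rate
universe_lifetime_s = 1e100  # rough order of magnitude

# Work in log space: the raw counts don't fit in any float.
log10_configs = region_sites * math.log10(k_states_per_site)          # 1000.0
log10_evaluable = math.log10(evals_per_second) \
    + math.log10(universe_lifetime_s)                                 # 150.0

print(f"configurations of the region: ~10^{log10_configs:.0f}")
print(f"evaluable before the universe ends: ~10^{log10_evaluable:.0f}")
print(f"fraction ever explored: ~10^{log10_evaluable - log10_configs:.0f}")
```

Even with these cartoonishly favorable numbers, a thousand-atom region leaves a fraction of roughly 10^-850 of its configurations examined.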
However, a proof isn’t needed to make this happen (for better and for worse). If a local configuration is created that is sufficiently more (universally!) valuable than any other known local configuration, neighbors will start copying it, and it will tile the galaxy, possibly ending progress if it’s a stable configuration, even if that configuration is far from the best one possible locally (let alone globally).
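A toy simulation makes the dynamic concrete. Everything here (the ring size, the value table, the copying threshold) is an arbitrary assumption, a sketch of the copying story rather than a model of anything physical: one site invents a configuration sufficiently better than the status quo, imitation spreads it until it tiles the whole ring, and the dynamics freeze, with a strictly better configuration left forever undiscovered.

```python
# Toy model of "a good-enough configuration tiles everything".
# All parameters (ring size, value table, threshold) are arbitrary assumptions.
VALUES = {0: 1.0, 1: 5.0, 2: 9.0}  # 2 is strictly better, but never discovered
THRESHOLD = 1.0                    # copy a neighbor only if it is this much better

def step(sites):
    """Each site copies its more valuable neighbor's configuration when that
    configuration beats its own by at least THRESHOLD."""
    n = len(sites)
    new = list(sites)
    changed = False
    for i in range(n):
        best = max(sites[(i - 1) % n], sites[(i + 1) % n], key=VALUES.get)
        if VALUES[best] >= VALUES[new[i]] + THRESHOLD:
            new[i] = best
            changed = True
    return new, changed

sites = [0] * 100   # everyone starts with the status-quo configuration
sites[50] = 1       # one site invents the "wonderful thing"

steps, changed = 0, True
while changed:
    sites, changed = step(sites)
    steps += 1

print(f"fixed point after {steps} steps; every site holds {set(sites)}")
print("configuration 2, the actually better one, was never instantiated")
```

Nothing in the update rule ever generates configuration 2, so the fixed point is an absorbing state: without a separate innovation process, imitation alone locks in whatever good-enough thing was found first.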
In practice, “a wonderful thing was invented, everyone copied it of their own free will, and stayed like that forever because human minds couldn’t conceive of a better world, leaving almost all possible future value on the table” doesn’t worry me nearly as much as other end-of-progress scenarios. The ones where everyone dies seem much more likely.
Or even in the expected lifetime of the universe.
Indeed. I think that a serious search for an answer to these questions is probably best left for the “Long Reflection.”