I think this can be solved in practice by granting the assumption that only a very sparse subset of all such strings will ever be mapped by our encryption algorithm when it is embedded physically. If we low-dimensionally parametrize hash functions of the form above, we can store the parameters for choosing a suitable hash function along with the encrypted text, and our algorithm only produces compressed strings of greater length if we try to encrypt more than some constant fraction of all possible strings of length ≤ n, with n fixed (namely, when we saturate the suitable choices of parameters). If this constant is anywhere within a few orders of magnitude of 1, the algorithm is always compressive in physical practice, by finiteness of matter: we will never have enough physical bits to represent that fraction of strings simultaneously.
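To make the idea concrete, here is a minimal sketch of the "store the parameters alongside the digest" scheme. It uses `hashlib.blake2b` with a salt as the low-dimensional parameter family, and simply searches for a salt under which the truncated hash is collision-free on the sparse subset of strings we actually want to handle; the function name and the specific parameter sizes are illustrative assumptions, not part of the original proposal. The digest alone is lossy, but (salt, digest) uniquely identifies each string *within the given subset*, which is the property the argument relies on.

```python
import hashlib

def find_injective_params(strings, digest_bits=16, max_seed=2**16):
    """Search a parametrized hash family (blake2b keyed by salt) for a
    parameter choice that is collision-free on the given sparse subset.
    Returns the seed, or None if the parameter space is saturated."""
    digest_bytes = digest_bits // 8
    for seed in range(max_seed):
        salt = seed.to_bytes(8, "big")
        digests = {
            hashlib.blake2b(s, salt=salt, digest_size=digest_bytes).digest()
            for s in strings
        }
        if len(digests) == len(strings):  # no collisions on this subset
            return seed
    return None  # too many strings for this digest size: scheme stops compressing

# A sparse subset of all 64-byte strings, each "compressed" to
# an 8-byte seed plus a 2-byte digest.
subset = [bytes([i]) * 64 for i in range(100)]
seed = find_injective_params(subset)
```

Once the subset grows past a constant fraction of all length-≤-n strings, every parameter choice collides somewhere and the search fails, which is exactly the saturation point described above.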
Maybe a similar argument can be made for Omega? If Omega must be made of matter, we can always pick a decision theory given the finiteness of actual Omegas as implemented in physics. Of course, if Omega is allowed to lie, there may be no algorithm for choosing the optimal decision theory unless we can see Omega's source code, even though a good choice exists.
I don’t understand what you’re saying here. You mean you pick a hash function, and then store both the hashed value and the parameters for the hash function? But hash functions are lossy.