This reminds me of the non-existence of a perfect encryption algorithm, where an encryption algorithm is a bijective map S → S, with S the set of finite strings over a given alphabet. The image of the set of strings of length at most n cannot lie inside the set of strings of length at most n-1, so either no string gets compressed (reduced in length) or some strings become longer after compression.
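To spell out the counting behind that claim, here is a quick Python check (my own illustration, not anything from the original comment): there are 2^(n+1) - 1 binary strings of length at most n but only 2^n - 1 of length at most n-1, so any injective map has to send some string of length at most n to something at least as long.

```python
# Pigeonhole check (my own illustration): over a binary alphabet, strings of
# length <= n cannot all map injectively into strings of length <= n-1,
# because there are strictly more of the former.
for n in range(1, 8):
    at_most_n = 2 ** (n + 1) - 1    # binary strings of length <= n (including the empty string)
    at_most_n_minus_1 = 2 ** n - 1  # binary strings of length <= n-1
    surplus = at_most_n - at_most_n_minus_1
    assert surplus > 0              # at least this many strings cannot get shorter
    print(n, at_most_n, at_most_n_minus_1, surplus)
```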
I think this can be solved in practice by relying on the assumption that only a very sparse subset of all such strings will ever be fed to our encryption algorithm when it is physically embedded. Then, if we low-dimensionally parametrize hash functions of the form above, we can store the parameters for choosing a suitable hash function along with the encrypted text, and our algorithm only produces outputs longer than their inputs if we try to encrypt more than some constant percentage of all possible strings of length ≤ n, with n fixed (namely, when we saturate the suitable choices of parameters). If this constant is anywhere within a few orders of magnitude of 1, the algorithm is always compressive in physical practice by the finiteness of matter (we won't ever have enough physical bits to represent that percentage of strings simultaneously).
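For what it's worth, here is one speculative way this might look in code. This is my own guess at a construction the comment does not spell out, and it leans on a big assumption: both the compressor and the decompressor already share the sparse set D of strings that will ever actually occur. A hash is lossy in general, but once a parameter choice makes it collision-free on that sparse set, the short hash together with the parameter identifies a member of the set uniquely.

```python
# Speculative reading of the parametrized-hash idea above -- my own sketch,
# NOT what the comment spells out.  Big assumption: compressor and decompressor
# already share the sparse set D of strings that will ever actually occur.
import hashlib
import os

def h(key: int, s: bytes, out_bits: int = 16) -> int:
    """Parametrized hash: SHA-256 of (key, s), truncated to out_bits bits."""
    digest = hashlib.sha256(key.to_bytes(4, "big") + s).digest()
    return int.from_bytes(digest, "big") >> (256 - out_bits)

def find_key(D, out_bits: int = 16, max_key: int = 2 ** 16):
    """Search the low-dimensional parameter space for a key with no collisions on D."""
    for key in range(max_key):
        if len({h(key, s, out_bits) for s in D}) == len(D):
            return key
    return None  # parameters 'saturated': D is too dense for this output size

D = [os.urandom(32) for _ in range(100)]  # the sparse set we actually care about
key = find_key(D)                         # stored alongside each compressed value
assert key is not None
lookup = {h(key, s): s for s in D}        # decompression is a lookup over the shared D
original = D[42]
assert lookup[h(key, original)] == original  # 32-byte string recovered from a 2-byte code plus the key
```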
Maybe a similar argument can be made for Omega? If Omega must be made of matter, we can always pick a decision theory given the finiteness of actual Omegas as implemented in physics. Of course, there may be no algorithm for choosing the optimal decision theory if Omega is allowed to lie, unless we can see Omega's source code, even though a good choice exists.
I don't understand what you're saying here. You mean you pick a hash function, and then store both the hashed value and the parameters for the hash function? But hash functions are lossy.
Why does perfection in an encryption algorithm require compression? Did you mean to say “compression algorithm”?
Some notes on compression algorithms:

If one defines a compression algorithm as a bijective map on all strings, and a decompression algorithm as an inverse of a compression algorithm, then by this definition every decompression algorithm is also a compression algorithm.

Suppose we apply a compression algorithm to a string, then apply the algorithm to the result, and so on; call this the orbit of the string. Every orbit is either finite or infinite. A finite orbit eventually comes back to the original string (the map is a bijection). The lengths of the strings in an orbit cannot be strictly monotonically decreasing (they are non-negative integers), and if they are constant, the orbit must be finite (there are only finitely many strings of any fixed length). In an infinite orbit, the lengths of the strings tend to infinity. So every compression algorithm either simply cycles between some strings or eventually makes things bigger.
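To make the orbit picture concrete, here is a small toy (again my own sketch, not anything from the thread): identify binary strings with natural numbers in the usual way, then iterate two different bijections, one whose orbits are all 2-cycles and one whose orbits are all infinite, so that the string lengths along an orbit grow without bound.

```python
# Toy illustration of the orbit argument (my own sketch, not from the thread):
# a bijection on all binary strings either cycles or, iterated long enough,
# produces arbitrarily long strings.

def nat_to_str(n: int) -> str:
    """Standard bijection N <-> {0,1}*: 0 <-> '', 1 <-> '0', 2 <-> '1', 3 <-> '00', ..."""
    return bin(n + 1)[3:]  # drop the leading '0b1' of n+1 in binary

def str_to_nat(s: str) -> int:
    return int("1" + s, 2) - 1

def cycle_map(s: str) -> str:
    """Bijection whose orbits are all 2-cycles: swap the strings numbered 2k and 2k+1."""
    n = str_to_nat(s)
    return nat_to_str(n + 1 if n % 2 == 0 else n - 1)

def growing_map(s: str) -> str:
    """Bijection whose orbits are all infinite: identify N with Z by zig-zag, then add 1."""
    n = str_to_nat(s)
    z = n // 2 if n % 2 == 0 else -(n + 1) // 2  # N -> Z
    z += 1
    n = 2 * z if z >= 0 else -2 * z - 1          # Z -> N
    return nat_to_str(n)

# Finite orbit: applying cycle_map twice returns the original string.
assert cycle_map(cycle_map("0110")) == "0110"

# Infinite orbit: lengths may dip at first, but they are unbounded.
s = "0110"
for i in range(1, 10001):
    s = growing_map(s)
    if i in (10, 100, 1000, 10000):
        print(i, len(s))
```

The second map's lengths only creep upward logarithmically here, but the point is the dichotomy: either the orbit cycles or the lengths are unbounded.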
Yes, thank you, I meant compression algorithm.
That’s the precise NFL I had in mind.
I don’t see how this is comparable. It’s not like you’re shuffling around bad decisions and good decisions into an optimal arrangement. You just try to make better decisions, and the bad decisions can go take a hike.
(edit: basically, what Wedifrid said at 2:25)
Yeah, as I noted above. It’s reasonably obvious I’m not actually any sort of mathematician :-)