Anything sufficiently far away from you is causally isolated from you. Because of the fundamental constraints of physics, information from there can never reach here, and vice versa. You may as well be in separate universes.
The performance of AlphaGo got me thinking about algorithms we can't access. In the case of AlphaGo, we implemented an algorithm (AlphaGo) which discovered some strategies we could never have created ourselves. (Go master Ke Jie famously said, "I would go as far as to say not a single human has touched the edge of the truth of Go.")
Perhaps we can imagine a sort of "logical causal isolation" (LCI). An algorithm is logically causally isolated from us if we cannot discover it ourselves (as with the Go strategies AlphaGo found, which no human had discovered directly) and we cannot specify an algorithm to discover it (except by random accident), given finite computation over a finite time horizon (i.e. within the lifetime of the observable universe).
Importantly, we can devise algorithms which search the entire space of algorithms (e.g. generate all possible strings of bits of length less than n, as n approaches infinity), but there's little reason to expect that such a strategy will result in useful outputs of any interesting finite length: there appear to be only enough atoms in the observable universe (~10^80) to represent all possible algorithms of length log2(10^80) ≈ 265 bits.

There's one important weakness in LCI that doesn't exist in physical causal isolation: we can randomly jump to algorithms of arbitrary lengths. This gives us the weird ability to pull stuff from outside our LCI-cone into it. Unfortunately, we cannot do so with any expectation of arriving at a useful algorithm. (There's an interesting question, which I haven't yet thought about, concerning the distribution of useful algorithms of a given length.) Hence we must add the caveat "except by random accident" to our definition of LCI.
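To make both moves concrete, here is a minimal sketch of the exhaustive enumeration and the "random jump" described above. The function names are mine, and bitstrings stand in for encoded algorithms:

```python
from itertools import count, product
import random

def all_bitstrings():
    """Enumerate every finite bitstring, in order of increasing length.

    As n grows without bound, every possible algorithm encoding
    eventually appears exactly once -- but the layer of length n
    contains 2**n strings, so progress stalls quickly.
    """
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def random_jump(length):
    """Sample one uniformly random bitstring of the given length.

    This can land anywhere, including far beyond what enumeration
    will ever reach -- but the chance of hitting any particular
    target of length n is 2**-n.
    """
    return "".join(random.choice("01") for _ in range(length))

gen = all_bitstrings()
print([next(gen) for _ in range(6)])  # ['0', '1', '00', '01', '10', '11']
print(random_jump(300))               # one accidental 300-bit "algorithm"
```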
We aren’t LCI’d from the strategies AlphaGo used, because we created AlphaGo and AlphaGo discovered those strategies (even if human Go masters may never have discovered them independently). I wonder what algorithms exist beyond not just our horizons, but the horizons of all the algorithms which descend from everything we are able to compute.
Two things are necessary for an algorithm to be useful:

1. If it's not fast enough, it doesn't matter how good it is.
2. If we don't know what it's good for, it doesn't matter how good it is (until we figure that out).
Part of the issue with this might be programs that don't work or don't do anything. (Beyond the trivial, it's not clear how to select for programs that do something, outside of something like AlphaGo.)
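One crude way to see how sparse "programs that do anything" are: generate random text and keep only the candidates that even parse. This sketch (mine, not from the thread) uses random Python source as a stand-in for random bitstring programs; parsing is of course a far weaker test than doing something useful:

```python
import random
import string

def random_source(length):
    """A random printable string, standing in for a random program."""
    return "".join(random.choice(string.printable) for _ in range(length))

trials, parsed = 10_000, 0
for _ in range(trials):
    try:
        # Weakest possible filter: does the candidate even parse?
        compile(random_source(20), "<candidate>", "exec")
        parsed += 1
    except (SyntaxError, ValueError):
        pass

print(f"{parsed}/{trials} random 20-char strings even parse as Python")
# And parsing says nothing about *usefulness* -- that's the hard part.
```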
Sure! My brute-force bitwise algorithm generator won't be fast enough to generate any particular algorithm of length 300 bits, and our universe probably can't support a representation of any algorithm longer than the number of atoms in the observable universe, ~10^80 bits. (I don't know much about physics, so this could be very wrong, but think of it as a useful bound. If there's a better one (e.g. the number of Planck volumes in the observable universe), substitute that and carry on, and also please let me know!)
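As a sanity check on "won't be fast enough": under some deliberately generous assumptions of mine (the throughput and runtime figures below are not from the thread), brute force doesn't come close at 300 bits:

```python
# Back-of-the-envelope check. Both figures are assumptions, chosen
# to be generous to the brute-force searcher.
RATE = 10**18    # candidate bitstrings tested per second (assumed)
TIME = 4.35e17   # seconds -- roughly the current age of the universe

tested = RATE * TIME   # ~4.35e35 candidates examined in total
space = 2**300         # number of bitstrings of length exactly 300

print(f"candidates tested: {tested:.2e}")           # ~4.35e+35
print(f"search space:      {space:.2e}")            # ~2.04e+90
print(f"fraction covered:  {tested / space:.2e}")   # ~2.1e-55
```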
Another class of algorithms that causes problems is those that don't do anything useful for some number of steps, after which they begin to output something useful. Because of the halting problem, we don't get to know in general whether they will halt, so if the useful structure only emerges after many steps, we may not be willing, or able, to run them that long.
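A standard workaround for not knowing how long to run each candidate is dovetailing: interleave all candidates, giving each one more step per round, so any program whose useful output appears after finitely many steps eventually gets enough steps to produce it. A minimal sketch (this is a textbook technique, not something proposed in the thread), modeling programs as Python generators:

```python
from itertools import islice

def dovetail(programs):
    """Run several programs in lockstep, one step per round each.

    Each program is modeled as a generator; each next() is one step.
    No non-halting program can starve the rest, and any output that
    appears after finitely many steps is eventually reached.
    """
    running = list(enumerate(programs))
    while running:
        survivors = []
        for i, prog in running:
            try:
                yield i, next(prog)   # one more step for program i
                survivors.append((i, prog))
            except StopIteration:
                pass                  # program i halted
        running = survivors

def spins_forever():
    while True:
        yield None                    # never produces anything useful

def silent_then_useful(delay):
    for _ in range(delay):
        yield None                    # long, useless prefix...
    yield "useful output"             # ...then structure finally emerges

for i, out in islice(dovetail([spins_forever(), silent_then_useful(4)]), 12):
    if out is not None:
        print(f"program {i}: {out}")  # -> program 1: useful output
```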
I'm not a physicist either, but quantum mechanics might change the limits. (If it scales; though this might still leave input and output limits: if the quantum computer can't store the output classically, then its ability to run the program probably doesn't matter. This might make less efficient cryptosystems more secure, by virtue of sheer size.*)
*Want your messages to be more secure? Padding.
Want your key to be more secure? Length.