A description of a superintelligence might be quite easy to find in our universe; for example, if the entire universe were tiled with AIs for most of its lifetime, it might not take much additional information to point to one of them once you have described the universe. If the Kolmogorov complexity of the universe were really only about 1000 bits, this could easily beat cousin_it's suggestion.
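A minimal sketch of the bound being invoked, writing $K(\cdot)$ for Kolmogorov complexity and using $U$ and $\mathrm{SI}$ as illustrative symbols for the universe and a superintelligence within it (not notation from the original discussion):

$$
K(\mathrm{SI}) \;\le\; K(U) \;+\; K(\mathrm{SI} \mid U) \;+\; O(\log)
$$

Here $K(\mathrm{SI} \mid U)$ is the extra information needed to locate a superintelligence given a full description of the universe. If AIs tile the universe for most of its history, that locating index can be short, so with $K(U) \approx 1000$ bits the total description length stays close to 1000.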
This is an upper bound, not a lower bound. Roughly, that means that whenever you lack proof to the contrary, you assume the worst.
What? Cousin_it gave an upper bound. Timtyler and I are pointing out a plausible way the value could be significantly below that upper bound.
It is true that the grandparent does not fit this context. Mistake!