One difference between optimization power and the folk notion of “intelligence”: Suppose the Village Idiot is told the password of an enormous abandoned online bank account. The Village Idiot now has vastly more optimization power than Einstein does; this optimization power rests neither on social status nor on raw might, but on the actions the Village Idiot can think of taking (most of which start with logging in to account X with password Y) that don’t occur to Einstein. However, we wouldn’t label the Village Idiot as more intelligent than Einstein.
Is the Principle of Least Action infinitely “intelligent” by your definition? The PLA consistently picks a physical solution to the n-body problem that surprises me in the same way Kasparov’s brilliant moves surprise me: I can’t come up with the exact path the n objects will take, but after I see the path that the PLA chose, I find (for each object) the PLA’s path has a smaller action integral than the best path I could have come up with.
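The "smaller action integral" comparison above is easy to check numerically. Below is a minimal sketch (my own illustration, not anything from the original post): a free particle of unit mass traveling from x = 0 to x = 1 in unit time, where the straight-line path nature actually "picks" is compared against a hypothetical wiggly path with the same endpoints. The path names and the perturbation amplitude are arbitrary choices for the example.

```python
import math

def action(path, n=10_000):
    """Approximate the free-particle action S = integral of (1/2) v^2 dt
    over [0, 1] (mass m = 1), using finite-difference velocities on a
    discretized path x(t)."""
    dt = 1.0 / n
    s = 0.0
    for i in range(n):
        v = (path((i + 1) * dt) - path(i * dt)) / dt
        s += 0.5 * v * v * dt
    return s

# Both paths share the endpoints x(0) = 0 and x(1) = 1.
straight = lambda t: t                                # the path the PLA "chooses"
wiggly = lambda t: t + 0.3 * math.sin(math.pi * t)    # a guessed alternative

S_straight = action(straight)  # ≈ 0.5
S_wiggly = action(wiggly)      # ≈ 0.72 — strictly larger
```

Any smooth perturbation with fixed endpoints raises the action here, which is the sense in which the PLA's choice "beats" every path I could have proposed.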
An AI whose only goal is to make sure such-and-such coin will not, the next time it’s flipped, turn up heads, can apply only (slightly less than) 1 bit of optimization pressure by your definition, even if it vaporizes the coin and then builds a Dyson sphere to provide infrastructure and resources for its ongoing efforts to probe the Universe to ensure that it wasn’t tricked and that the coin actually was vaporized as it appeared to be.
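To make the bit-counting concrete, here is one plausible formalization of the measure this comment is assuming (not necessarily exactly the definition the original post intends): optimization power in bits is −log₂ of the probability mass, under the no-optimizer baseline, of outcomes at least as preferred as the one achieved. The specific baseline probabilities below are made-up numbers for illustration.

```python
import math

def optimization_power_bits(mass_at_least_as_good):
    """-log2 of the baseline probability mass of outcomes ranked at least
    as high (by the agent's preference ordering) as the outcome achieved."""
    return -math.log2(mass_at_least_as_good)

# A fair coin: "not heads" covers exactly half the baseline outcome space,
# so guaranteeing it is worth exactly 1 bit.
one_bit = optimization_power_bits(0.5)        # 1.0

# If the baseline already assigns "not heads" slightly more than half the
# mass (edge landings, lost coins, and so on), guaranteeing it is worth
# slightly *less* than 1 bit -- Dyson sphere notwithstanding.
almost_one_bit = optimization_power_bits(0.51)  # ≈ 0.97
```

The point of the example survives the formalization: the measure scores only how improbable the achieved outcome region was, not the scale of the machinery deployed to hit it.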