Do you mean to say that a prophecy might happen to be self-fulfilling even if it wasn’t optimized for being so? Or are you trying to distinguish between “explicit” and “implicit” searches for fixed points?
More the second than the first, but I’m also saying that the line between the two is blurry.
For example, suppose there is someone who will often do what predict-o-matic predicts, provided they can understand how to do it. They often ask it what they are going to do. At first, predict-o-matic predicts their behavior as usual. This makes their behavior somewhat more predictable than it would otherwise be. Predict-o-matic locks into the resulting patterns (especially the predictions which work best as suggestions). Behavior becomes even more regular. And so on.
You could say that no one is optimizing for fixed-point-ness here, and that predict-o-matic just stumbles into it. But effectively, an optimization toward fixed points is being implemented by the predict-o-matic and the person together.
In situations like that, the system converges to an optimized fixed point over time, even though the learning algorithm itself isn't explicitly searching for one.
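Here's a minimal simulation of that loop, just to make the dynamic concrete. Everything in it (the five routines, the familiarity-based "follow" rule, the particular numbers) is an assumption I'm making up for illustration; the point is only that neither component searches for a fixed point, yet the pair locks into one.

```python
import random
from collections import Counter

# Toy model of the loop described above. All specifics (five routines, the
# familiarity-based follow rule, the numbers) are illustrative assumptions.

ACTIONS = list(range(5))          # five routines the person might do

def predict(history):
    """Predict the person's most common past action (random guess at first)."""
    if not history:
        return random.choice(ACTIONS)
    return Counter(history).most_common(1)[0][0]

def act(prediction, history):
    """Follow the prediction with probability that grows with familiarity --
    a crude stand-in for 'if they can understand how to do it'."""
    familiarity = history.count(prediction) / max(len(history), 1)
    if random.random() < 0.3 + 0.6 * familiarity:
        return prediction
    return random.choice(ACTIONS)  # otherwise act "naturally" (at random)

random.seed(0)
history, matches = [], []
for step in range(200):
    p = predict(history)           # predict-o-matic makes its prediction
    a = act(p, history)            # the person partly conforms to it
    history.append(a)
    matches.append(p == a)
    if (step + 1) % 50 == 0:
        recent = sum(matches[-50:]) / 50
        dominant = Counter(history).most_common(1)[0]
        print(f"steps {step - 48:3d}-{step + 1}: accuracy={recent:.2f}, "
              f"dominant (action, count)={dominant}")
# Prediction accuracy rises and one routine comes to dominate: the prediction
# nudges behavior, the more regular behavior makes prediction easier, and the
# pair drifts into a self-fulfilling pattern with no explicit fixed-point
# search anywhere in the learner.
```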
To highlight the “blurry distinction” more:
Note that if the prediction algorithm anticipates this process (perhaps only partially), it will "jump ahead", so that convergence to a fixed point happens more within the computation of the predictor and less over steps of real-world interaction. This isn't formally the same as searching for fixed points internally (you get much weaker guarantees out of this haphazard process), but it does mean that, under some conditions, optimization for fixed-point finding is happening within the system.
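To make the "jumping ahead" concrete, here's a sketch, switching to a continuous toy; the reacted_behavior model and its numbers are assumptions of mine, not anything from the setup above. The predictor has learned a model of how the person reacts to hearing a prediction, and it iterates that model inside its own computation until the prediction matches the behavior it expects the prediction to cause. This is iterate-until-stable on a learned model rather than a principled fixed-point search, so the guarantees are weak, but the fixed-point finding now happens within one call to the predictor instead of over many rounds of real interaction.

```python
def reacted_behavior(prediction):
    """Hypothetical learned model of the person: on hearing prediction p they
    do roughly half of what was predicted plus their own default tendency."""
    return 0.5 * prediction + 0.3

def predict_by_internal_iteration(initial_guess=0.0, tol=1e-6, max_iters=100):
    """Iterate prediction -> modeled reaction -> new prediction internally,
    stopping when the prediction is consistent with the reaction it causes."""
    p = initial_guess
    for _ in range(max_iters):
        reaction = reacted_behavior(p)
        if abs(reaction - p) < tol:    # internally consistent: a fixed point
            return reaction
        p = reaction
    return p                           # no convergence within the budget

print(predict_by_internal_iteration())  # ~0.6, the fixed point of the model
```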