Sorry for the (very) late reply!
I’m not very familiar with that phrasing of conditioning. Are you describing finetuning, as in the divide mentioned here? If so, I have a comment there about why I think it might not really be qualitatively different.
I think my picture is slightly different for how self-fulfilling prophecies could occur. For one, I’m not using “inner alignment failure” here to refer to a mesa-optimizer in the traditional sense of the AI trying to achieve optimal loss (I agree that in that case it’d probably be the outcome you describe), but to a case where it’s still just a generative model, yet one that needs some way to resolve the problem of prediction in recursive cases (for example, asking GPT to predict whether the price of a stock will rise or fall, when that prediction itself influences the price). Even to predict the next token with high accuracy, it’d need to solve this problem at some point. My prediction is that it’s more likely to handle this by modelling increasingly low-fidelity versions of itself in a stack, but it’s also possible for it to do fixed-point reasoning (like in the Predict-O-Matic story).
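To make the distinction concrete, here’s a toy sketch (entirely my own construction, not a claim about GPT internals): a predictor whose published forecast feeds back into the outcome it is forecasting. The `outcome` function and all numbers are made up for illustration; the point is just the difference between bottoming out a stack of cruder self-models and explicitly searching for a fixed point.

```python
# Toy feedback loop: the published prediction partially causes the outcome,
# e.g. a strong "price will rise" forecast induces buying that confirms it.
def outcome(prediction: float) -> float:
    # Hypothetical linear feedback; the self-consistent answer is 0.6.
    return 0.3 + 0.5 * prediction


# Strategy 1: a stack of increasingly low-fidelity self-models.
# Each level asks "what would a cruder copy of me predict?" and the
# recursion bottoms out at a fixed prior instead of going on forever.
def stacked_prediction(depth: int, base_guess: float = 0.5) -> float:
    if depth == 0:
        return base_guess  # crudest self-model: just a prior, no self-modelling
    return outcome(stacked_prediction(depth - 1))


# Strategy 2: explicit fixed-point reasoning (Predict-O-Matic style):
# search for a prediction that equals the outcome it brings about.
def fixed_point_prediction(tol: float = 1e-9) -> float:
    p = 0.5
    for _ in range(10_000):
        new_p = outcome(p)
        if abs(new_p - p) < tol:
            break
        p = new_p
    return p


print(stacked_prediction(depth=3))   # 0.5875 -- approaches 0.6 as depth grows
print(fixed_point_prediction())      # ~0.6 -- the self-fulfilling answer
```

The stack only approximates the self-consistent answer, with the error shrinking as the self-models get deeper (and presumably lower-fidelity), whereas the fixed-point search lands on it exactly, which is the behaviour I’d find more worrying.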