Re: prompting: So when you talk about “simulating a world,” or “describing some property of a world,” I interpreted that as conditionalizing on a feature of the AI’s latent model of the world, rather than just giving it a prompt like “You are a very smart and human-aligned researcher.” The latter deviates from the former in some pretty important ways, which should probably be considered when evaluating the safety of outputs from generative models.
Re: prophecies: I mean that your training procedure doesn’t give an AI an incentive to make self-fulfilling prophecies. I think you have a picture where an AI with an inner alignment failure might choose outputs that are optimal according to the loss function but lead to bad real-world consequences, and that these outputs would look like self-fulfilling prophecies because that’s a way to be accurate while still having degrees of freedom about how to affect the world. I’m saying that the training loss just cares about next-word accuracy, not long-term accuracy according to the latent model of the world, so an AI with an inner alignment failure might choose outputs that are highly probable according to next-word accuracy but lead to bad real-world consequences, and these outputs would not look like self-fulfilling prophecies.
Sorry for the (very) late reply!
I’m not very familiar with that phrasing of conditioning—are you describing finetuning, along the lines of the divide mentioned here? If so, I have a comment there about why I think it might not really be qualitatively different.
I think my picture is slightly different for how self-fulfilling prophecies could occur. For one, I’m not using “inner alignment failure” here to refer to a mesa-optimizer in the traditional sense of the AI trying to achieve optimal loss (I agree that in that case it’d probably be the outcome you describe), but to a case where the AI is still just a generative model, yet needs some way to resolve the problem of predicting in recursive cases (for example, asking GPT to predict whether the price of a stock will rise or fall, when that prediction itself feeds back into the price). Even for just predicting the next token with high accuracy, it’d need to solve this problem at some point. My prediction is that it’s more likely to resolve this by modelling increasingly low-fidelity versions of itself in a stack, but it’s also possible for it to do fixed-point reasoning (like in the Predict-O-Matic story).
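To make the two strategies concrete, here’s a toy sketch (entirely my own construction; the `world`, `stacked_prediction`, and `fixed_point_prediction` functions and the numbers in them are illustrative assumptions, not anything from the post). The stack approach bottoms out at a crude self-model that ignores its own influence on the outcome, while the fixed-point approach searches directly for a prediction that is consistent with its own effect on the world:

```python
# Toy illustration (my own construction) of two ways a predictor might resolve
# a self-referential prediction: a stack of lower-fidelity self-models vs.
# explicit fixed-point reasoning.

def world(prediction: float, base_rate: float = 0.4, influence: float = 0.3) -> float:
    """Probability the stock rises, given the published prediction.

    The published prediction feeds back into the outcome (traders react to it),
    which is what makes the prediction self-referential.
    """
    return min(1.0, max(0.0, base_rate + influence * (prediction - 0.5)))


def stacked_prediction(depth: int) -> float:
    """Strategy 1: model a lower-fidelity copy of yourself.

    At depth 0 the model ignores its own influence entirely (the stack bottoms
    out); at depth d it assumes the world reacts to what the depth-(d-1) copy
    would have said.
    """
    if depth == 0:
        return world(0.5)  # crudest self-model: "I say something uninformative"
    return world(stacked_prediction(depth - 1))


def fixed_point_prediction(tol: float = 1e-9, max_iter: int = 1000) -> float:
    """Strategy 2: search directly for a self-fulfilling prediction p = world(p)."""
    p = 0.5
    for _ in range(max_iter):
        p_next = world(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p


if __name__ == "__main__":
    for d in range(5):
        print(f"stack depth {d}: {stacked_prediction(d):.4f}")
    print(f"fixed point:   {fixed_point_prediction():.4f}")
```

In this toy the feedback is weak (a contraction), so both strategies land near the same answer; with a stronger influence term there can be multiple fixed points, and which one a fixed-point reasoner settles on is exactly the kind of degree of freedom that makes self-fulfilling prophecies a concern.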