--You might be right that an AI which assumes it isn’t in a simulation is OK—but I think it’s too early to conclude that yet. We should think more about acausal trade before concluding it’s something we can safely ignore, even temporarily. There’s a good general heuristic of “Don’t make your AI assume things which you think might not be true” and I don’t think we have enough reason to violate it yet.
--You say
For every AI-specification built with the abstraction “Given some finite training data D, the AI predicts the next data point X according to how common it is that X follows D across the multiverse”, I think that AI is going to be misaligned (unless it’s trained with data that we can’t get our hands on, e.g. infinite in-distribution data), because of the standard universal-prior-is-misaligned-reasons. I think this holds true even if we’re trying to predict humans like in IDA. Thus, this definition of “optimal performance” doesn’t seem useful at all.
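(To make sure we’re talking about the same thing, I’m reading that abstraction as roughly the Solomonoff / universal-prior predictor — this is my gloss, not necessarily exactly what you meant:

M(x) = \sum_{p \,:\, U(p) \text{ starts with } x} 2^{-|p|}, \qquad \Pr(X \mid D) = \frac{M(DX)}{M(D)},

where U is a universal Turing machine and the sum over programs p plays the role of “across the multiverse”.)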
Isn’t that exactly the point of the “universal prior is misaligned” argument? The whole point of the argument is that this abstraction/specification (and related ones) is dangerous. So… I guess your title made it sound like you were teaching us something new about prediction (as in, prediction can be outer aligned at optimum) when really you are just arguing that we should change the definition of outer-aligned-at-optimum, and your argument is that the current definition makes outer alignment too hard to achieve? If this is a fair summary of what you are doing, then I guess I retract my objections and will reflect more.
Isn’t that exactly the point of the “universal prior is misaligned” argument? The whole point of the argument is that this abstraction/specification (and related ones) is dangerous.
Yup.
I guess your title made it sound like you were teaching us something new about prediction (as in, prediction can be outer aligned at optimum) when really you are just arguing that we should change the definition of outer-aligned-at-optimum, and your argument is that the current definition makes outer alignment too hard to achieve
I mean, it’s true that I’m mostly just trying to clarify terminology. But I’m not necessarily trying to propose a new definition – I’m saying that the existing definition already implies that malign priors are an inner alignment problem, rather than an issue with outer alignment. Evan’s footnote requires the model to perform optimally on everything it actually encounters in the real world (rather than asking it to do as well as it can across the multiverse, given its training data); so that definition doesn’t have a problem with malign priors. And as Richard notes here, common usage of “inner alignment” refers to any case where the model performs well on the training data but is misaligned during deployment, which definitely includes problems with malign priors. And per Rohin’s comment on this post, apparently he already agrees that malign priors are an inner alignment problem.
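(Stated a bit more explicitly, on my reading: the “across the multiverse” notion of optimal performance asks the model for

\Pr_{\text{model}}(X \mid D) = \frac{M(DX)}{M(D)},

the universal-prior posterior (M being the universal prior), which the misalignment argument says can be dominated by simulation hypotheses; Evan’s footnote instead asks for

\Pr_{\text{model}}(X \mid D) = \Pr_{\text{real world}}(X \mid D)

on the inputs the model actually encounters. A predictor can satisfy the second while violating the first, and a predictor led astray by malign priors fits the training data but violates the second at deployment — which is why it sorts under inner alignment on that definition. This is just my sketch of the contrast, not a formal statement from the post.)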
Basically, the main point of the post is just that the 11 proposals post is wrong about mentioning malign priors as a problem with outer alignment. And then I attached 3 sections of musings that came up when trying to write that :)
Well, at this point I feel foolish for arguing about semantics. I appreciate your post, and don’t have a problem with saying that the malignity problem is an inner alignment problem. (That is zero evidence that it isn’t also an outer alignment problem though!)
Evan’s footnote-definition doesn’t rule out malign priors unless we assume that the real world isn’t a simulation. We may have good pragmatic reasons to act as if it isn’t, but I still think you are changing the definition of outer alignment if you think it assumes we aren’t in a simulation. But *shrug* if that’s what people want to do, then that’s fine I guess, and I’ll change my usage to conform with the majority.
Cool, seems reasonable. Here are some minor responses: (perhaps unwisely, given that we’re in a semantics labyrinth)
Evan’s footnote-definition doesn’t rule out malign priors unless we assume that the real world isn’t a simulation
Idk, if the real world is a simulation made by malign simulators, I wouldn’t say that an AI accurately predicting the world is falling prey to malign priors. I would probably want my AI to accurately predict the world I’m in even if it’s simulated. The simulators control everything that happens anyway, so if they want our AIs to behave in some particular way, they can always just make them do that no matter what we do.
you are changing the definition of outer alignment if you think it assumes we aren’t in a simulation
Fwiw, I think this is true for a definition that always assumes that we’re outside a simulation, but I think it’s in line with previous definitions to say that the AI should think we’re not in a simulation iff we’re not in a simulation. That’s just stipulating unrealistically competent prediction. Another way to look at it is that in the limit of infinite in-distribution data, an AI may well never be able to tell whether we’re in the real world or in a simulation that’s identical to the real world; but it would be able to tell whether we’re in a simulation with simulators who actually intervene, because it would see them intervening somewhere in its infinite dataset. And that’s the type of simulators that we care about. So definitions of outer alignment that appeal to infinite data automatically assume that AIs would be able to tell the difference between worlds that are functionally like the real world, and worlds with intervening simulators.
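(Spelling that step out with the usual likelihood-ratio picture, and glossing over formal caveats: if a simulation hypothesis h assigns exactly the same conditional probabilities to every data point as the real-world hypothesis w, then

\prod_{t=1}^{T} \frac{\Pr_h(x_t \mid x_{<t})}{\Pr_w(x_t \mid x_{<t})} = 1 \quad \text{for every } T,

so no amount of data ever separates them, and it also doesn’t matter for prediction which one the AI ends up “believing”. But a hypothesis on which the simulators actually intervene disagrees with w at some step t^*, i.e. \Pr_h(x_{t^*} \mid x_{<t^*}) \neq \Pr_w(x_{t^*} \mid x_{<t^*}), and an infinite stream of in-distribution data eventually exposes any such disagreement.)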
And then, yeah, in practice I agree we won’t be able to learn whether we’re in a simulation or not, because we can’t guarantee in-distribution data. So this is largely semantics. But I do think definitions like this end up being practically useful, because convincing the agent that it’s not individually being simulated is already an inner alignment issue, for malign-prior-reasons, and this is very similar.
Thanks, this is helpful.