Due to the ordinary arguments about the universal prior being malign, this wouldn’t be outer aligned at optimum. Since this definition would mean that almost nothing is outer aligned, it seems like a bad definition. …
As far as practical consequences go, I think this should be treated the same as the more general problem of the universal prior being malign. Thus, I’d like to categorise it as a problem with inner alignment; and I’d like to assume that an AI that’s outer aligned at optimum would act like it’s not in a simulation, if it is in fact not in a simulation.
This happens by default if our chosen definition of optimal performance treats being-in-a-simulation as a fixed fact about its environment – that the AI is expected to know – and not as a source of uncertainty. I think my preferred solutions above capture this by default[3]. For any solution based on how humans generalise, though, it would be important that the humans condition on not being in a simulation.
This is unsatisfying to me. First you say that we can’t define optimum in the obvious way because then very few things would be outer aligned, then you say we should define optimum in such a way that the only way to be outer aligned is to assume you aren’t in a simulation. (How else would we get an AI that acts like it’s not in a simulation, if it is in fact not in a simulation? You can’t tell whether you are in a simulation or not, by definition, so the only way for such an AI to exist is for it to always act like it’s not in a simulation, i.e. to assume it isn’t.) An AI that assumes it isn’t in a simulation seems like a defective AI to me, so it’s weird to build that into the definition of outer alignment.
It’s possible I’m misunderstanding you though!
Things I believe about what sort of AI we want to build:
It would be kind of convenient if we had an AI that could help us do acausal trade. If assuming that it’s not in a simulation would preclude an AI from doing acausal trade, that’s a bit inconvenient. However, I don’t think this matters for the discussion at hand, for reasons I describe in the final array of bullet points below.
Even if it did matter, I don’t think that the ability to do acausal trade is a deal-breaker. If we had a corrigible, aligned, superintelligent AI that couldn’t do acausal trade, we could ask it to scan our brains, then compete through any competitive period on Earth / in space, and eventually recreate us and give us enough time to figure out this acausal trade thing ourselves. Thus, for practical purposes, an AI that assumes it isn’t in a simulation doesn’t seem defective to me, even if that means it can’t do acausal trade.
Things I believe about how to choose definitions:
When choosing how to define our terms, we should choose based on what abstractions are most useful for the task at hand. For the outer-alignment-at-optimum vs inner alignment distinction, we’re trying to choose a definition of “optimal performance” such that we can separately:
Design an intent-aligned AI out of idealised training procedures that always yield “optimal performance” on some metric. If we successfully do this, we’ve solved outer alignment.
Figure out a training procedure that produces an AI that actually does very well on the chosen metric (sufficiently well to be aligned, even if it doesn’t achieve absolute optimal performance). If we do this, we’ve solved inner alignment.
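To make this split a bit more concrete, here’s a rough formalization (notation of my own, not anything standard: $M$ is the chosen metric, $\pi$ ranges over models, $\epsilon$ is an error tolerance):
$$\text{Outer alignment at optimum: every } \pi^* \in \arg\max_{\pi} M(\pi) \text{ is intent-aligned.}$$
$$\text{Inner alignment: the training procedure actually finds some } \hat{\pi} \text{ with } M(\hat{\pi}) \ge \max_{\pi} M(\pi) - \epsilon,$$
for an $\epsilon$ small enough that the outer alignment guarantee still (approximately) applies to $\hat{\pi}$.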
Things I believe about what these candidate definitions would imply:
For every AI-specification built with the abstraction “Given some finite training data D, the AI predicts the next data point X according to how common it is that X follows D across the multiverse”, I think that AI is going to be misaligned (unless it’s trained with data that we can’t get our hands on, e.g. infinite in-distribution data), because of the standard universal-prior-is-misaligned-reasons. I think this holds true even if we’re trying to predict humans like in IDA. Thus, this definition of “optimal performance” doesn’t seem useful at all.
For AI-specifications built with the abstraction “Given some finite training data D, the AI predicts the next data point X according to how common it is that X follows D on Earth if we aren’t in a simulation”, I think it probably is possible to build aligned AIs. Since it also doesn’t seem impossible to train AIs to do something like this (i.e. we haven’t just moved the impossibility to the inner alignment part of the problem), it seems like a pretty good definition of “optimal performance”. (I sketch both candidate definitions a bit more formally after this list.)
Surprisingly, I think it’s even possible to build AIs that do assign some probability to being in a simulation out of this. E.g. we could train the AI via imitation learning to imitate me (Lukas). I assign a decent probability to being in a simulation, so a perfect Lukas-imitator would also assign a decent probability to being in a simulation. This is true even if the Lukas-imitator is just trying to imitate the real-world Lukas as opposed to the simulated Lukas, because real-world Lukas assigns some probability to being simulated, in his ignorance.
I’m also open to other definitions of “optimal performance”. I just don’t know any useful ones other than the ones I mention in the post.
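To be a little more precise about the two abstractions above (a sketch in notation I’m introducing here: $\mu$ is some prior over worlds $w$, e.g. a universal/simplicity prior, and $\mathrm{sim}$ is the event that the data-generating process is a simulation):
$$P_{\mathrm{multi}}(X \mid D) \;\propto\; \sum_{w} \mu(w)\, P(D X \mid w) \qquad \text{(posterior over all worlds, simulated ones included)}$$
$$P_{\mathrm{Earth}}(X \mid D) \;=\; P(X \mid D, \neg\mathrm{sim}) \qquad \text{(same posterior, conditioned on the data coming from non-simulated Earth)}$$
The malign-prior argument bites in the first case because simulated copies of $D$ can dominate the sum. Note that $P_{\mathrm{Earth}}$ can still produce outputs that assign probability to being in a simulation – e.g. a perfect imitation of real-world Lukas still assigns a decent probability to being simulated – because the conditioning is about where the training data comes from, not about what the predicted humans believe.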
Thanks, this is helpful.
--You might be right that an AI which assumes it isn’t in a simulation is OK—but I think it’s too early to conclude that yet. We should think more about acausal trade before concluding it’s something we can safely ignore, even temporarily. There’s a good general heuristic of “Don’t make your AI assume things which you think might not be true” and I don’t think we have enough reason to violate it yet.
--You say
For every AI-specification built with the abstraction “Given some finite training data D, the AI predicts the next data point X according to how common it is that X follows D across the multiverse”, I think that AI is going to be misaligned (unless it’s trained with data that we can’t get our hands on, e.g. infinite in-distribution data), because of the standard universal-prior-is-misaligned-reasons. I think this holds true even if we’re trying to predict humans like in IDA. Thus, this definition of “optimal performance” doesn’t seem useful at all.
Isn’t that exactly the point of the “universal prior is misaligned” argument? The whole point of the argument is that this abstraction/specification (and related ones) is dangerous. So… I guess your title made it sound like you were teaching us something new about prediction (as in, prediction can be outer aligned at optimum) when really you are just arguing that we should change the definition of outer-aligned-at-optimum, and your argument is that the current definition makes outer alignment too hard to achieve? If this is a fair summary of what you are doing, then I retract my objections, I guess, and will reflect more.
Isn’t that exactly the point of the “universal prior is misaligned” argument? The whole point of the argument is that this abstraction/specification (and related ones) is dangerous.
Yup.
I guess your title made it sound like you were teaching us something new about prediction (as in, prediction can be outer aligned at optimum) when really you are just arguing that we should change the definition of outer-aligned-at-optimum, and your argument is that the current definition makes outer alignment too hard to achieve
I mean, it’s true that I’m mostly just trying to clarify terminology. But I’m not necessarily trying to propose a new definition – I’m saying that the existing definition already implies that malign priors are an inner alignment problem, rather than an issue with outer alignment. Evan’s footnote requires the model to perform optimally on everything it actually encounters in the real world (rather than asking it to do as well as it can across the multiverse, given its training data); so that definition doesn’t have a problem with malign priors. And as Richard notes here, common usage of “inner alignment” refers to any case where the model performs well on the training data but is misaligned during deployment, which definitely includes problems with malign priors. And per Rohin’s comment on this post, apparently he already agrees that malign priors are an inner alignment problem.
Basically, the main point of the post is just that the 11 proposals post is wrong about mentioning malign priors as a problem with outer alignment. And then I attached 3 sections of musings that came up when trying to write that :)
Well, at this point I feel foolish for arguing about semantics. I appreciate your post, and don’t have a problem with saying that the malignity problem is an inner alignment problem. (That is zero evidence that it isn’t also an outer alignment problem though!)
Evan’s footnote-definition doesn’t rule out malign priors unless we assume that the real world isn’t a simulation. We may have good pragmatic reasons to act as if it isn’t, but I still think you are changing the definition of outer alignment if you think it assumes we aren’t in a simulation. But *shrug* if that’s what people want to do, then that’s fine I guess, and I’ll change my usage to conform with the majority.
Cool, seems reasonable. Here are some minor responses: (perhaps unwisely, given that we’re in a semantics labyrinth)
Evan’s footnote-definition doesn’t rule out malign priors unless we assume that the real world isn’t a simulation
Idk, if the real world is a simulation made by malign simulators, I wouldn’t say that an AI accurately predicting the world is falling prey to malign priors. I would probably want my AI to accurately predict the world I’m in even if it’s simulated. The simulators control everything that happens anyway, so if they want our AIs to behave in some particular way, they can always just make them do that no matter what we do.
you are changing the definition of outer alignment if you think it assumes we aren’t in a simulation
Fwiw, I think this is true for a definition that always assumes that we’re outside a simulation, but I think it’s in line with previous definitions to say that the AI should think we’re not in a simulation iff we’re not in a simulation. That’s just stipulating unrealistically competent prediction. Another way to look at it is that in the limit of infinite in-distribution data, an AI may well never be able to tell whether we’re in the real world or in a simulation that’s identical to the real world; but it would be able to tell whether we’re in a simulation with simulators who actually intervene, because it would see them intervening somewhere in its infinite dataset. And that’s the type of simulators that we care about. So definitions of outer alignment that appeal to infinite data automatically assume that AIs would be able to tell the difference between worlds that are functionally like the real world, and worlds with intervening simulators. (I sketch this a bit more formally below.)
And then, yeah, in practice I agree we won’t be able to learn whether we’re in a simulation or not, because we can’t guarantee in-distribution data. So this is largely semantics. But I do think definitions like this end up being practically useful, because convincing the agent that it’s not individually being simulated is already an inner alignment issue, for malign-prior-reasons, and this is very similar.
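One way to cash out the infinite-data point above (again just a sketch, with notation of my own): let $h_{\mathrm{real}}$ be the hypothesis that we’re in the non-simulated world and $h_{\mathrm{int}}$ be any hypothesis on which simulators detectably intervene by some finite time $t$. If the data $D_{1:n}$ is in-distribution, i.e. contains no such intervention, then once $n \ge t$,
$$\frac{P(h_{\mathrm{int}} \mid D_{1:n})}{P(h_{\mathrm{real}} \mid D_{1:n})} \;=\; \frac{P(h_{\mathrm{int}})}{P(h_{\mathrm{real}})} \cdot \frac{P(D_{1:n} \mid h_{\mathrm{int}})}{P(D_{1:n} \mid h_{\mathrm{real}})} \;\to\; 0,$$
since $h_{\mathrm{int}}$ assigns (near-)zero likelihood to data containing no intervention. A simulation that never observably intervenes has exactly the same likelihood as $h_{\mathrm{real}}$ and so is never ruled out – but, as argued above, the intervening simulators are the ones that matter here.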