I’m trying to understand this debate, and probably failing.
>human concepts cannot be faithfully and robustly translated into the system’s internal ontology at all.
I assume we all agree that the system can understand the human ontology, though? This is at least necessary for communicating and reasoning about humans, which LLMs can clearly already do to some extent.
There’s a lot of work around mapping ontologies, and this is known to be difficult, but very possible—especially for a superhuman intelligence.
So, I fail to see what exactly the problem is. If this smarter system can understand and reason about human ways of thinking about the world, I assume it could optimize for these ways if it wanted to. I assume the main question is if it wants to—but I fail to understand how this is an issue of ontology.
If a system really couldn’t reason about human ontologies, then I don’t see how it would understand the human world at all.
I’d appreciate any posts that clarify this question.
This would probably need a whole additional post to answer fully, but I can kinda gesture briefly in the right direction.
Let’s use a standard toy model: an AI which models our whole world using quantum fields directly. Does this thing “understand the human ontology”? Well, the human ontology is embedded in its model in some sense (since there are quantum-level simulations of humans embedded in its model), but the AI doesn’t actually factor any of its cognition through the human ontology. So if we want to e.g. translate some human instructions or human goals or some such into that AI’s ontology, we need a full quantum-level specification of the instructions/goals/whatever.
Now, presumably we don’t actually expect a strong AI to simulate the whole world at the level of quantum fields, but that example at least shows what it could look like for an AI to be highly capable, including able to reason about and interact with humans, but not use the human ontology at all.
I assume that this AI agent would be able to have conversations with humans about our ontologies. I strongly assume it would need to be able to do the work of “thinking through our eyes/ontologies” to do this.
I’d imagine the situation would be something like:
1. The agent uses quantum-simulations almost all of the time.
2. When it needs to answer human questions, like AP Physics problems, it easily works out how to build the human-used models/ontologies it needs to do so.
Similar to how graduate physicists can still do mechanics questions without considering special relativity or quantum effects, if asked.
So I’d assume that the agent/AI could do the work of translation—we wouldn’t need to.
I guess, here are some claims:
1) Humans would have trouble policing a being way smarter than us.
2) Humans would have trouble understanding AIs with much more complex ontologies.
3) AIs with more complex ontologies would have trouble understanding humans.
#3 seems the most suspect to me, though 1 and 2 also seem questionable.
I strongly assume it would need to be able to do the work of “thinking through our eyes/ontologies” to do this.
Why would an AI need to do that? It can just simulate what happens conditional on different sounds coming from its speaker or whatever, and then emit the sounds which result in the outcomes which it wants.
A human ontology is not obviously the best tool, even for e.g. answering mostly-natural-language questions on an exam. Heck, even today’s exam help services will often tell you to guess which answer the graders will actually mark as correct, rather than taking questions literally or whatever. Taken to the extreme, an exam-acing AI would plausibly perform better by thinking about the behavior of the physical system which is a human grader (or a human recording the “correct answers” for an automated grader to use), rather than trying to reason directly about the semantics of the natural language as a human would interpret it.
(To be clear, my median model does not disagree with you here, but I’m playing devil’s advocate.)
I think that raises more questions than it answers, naturally. (“Okay, can an agent so capable that it can easily make a quantum-simulation to answer tests really not find some way of effectively understanding human ontologies for decision-making?”), but it seems like this is more for Eliezer, and also, that might be part of a longer post.
Okay, can an agent so capable that it can easily make a quantum-simulation to answer tests really not find some way of effectively understanding human ontologies for decision-making?
Could it? Maybe. But why would it? What objective, either as the agent’s internal goal or as an outer optimization signal, would incentivize the agent to bother using a human ontology at all, when it could instead use the predictively-superior quantum simulator? Like, any objective ultimately grounds out in some physical outcome or signal, and the quantum simulator is just better for predicting which actions have which effects on that physical outcome/signal.
If it’s able to function as well as it would if it understands our ontology, if not better, then why does it matter if it doesn’t use our ontology?
I assume a system you’re describing could still be used by humans to do (basically) all of the important things. Like, we could ask it “optimize this company, in a way that we would accept, after a ton of deliberation”, and it could produce a satisfying response.
> But why would it? What objective, either as the agent’s internal goal or as an outer optimization signal, would incentivize the agent to bother using a human ontology at all, when it could instead use the predictively-superior quantum simulator?
I mean, if it can always act just as well as if it could understand human ontologies, then I don’t see the benefit of it “technically understanding human ontologies”. This seems like it’s veering into a semantic argument or something.
If an agent can trivially act as if it understands Ontology X, where/why does it actually matter that it doesn’t technically “understand” ontology X?
I assume that the argument that “this distinction matters a lot” would functionally play out in there being some concrete things that it can’t do.
Bear in mind that the goal itself, as understood by the AI, is expressed in the AI’s ontology. The AI is “able to function as well as it would if it understands our ontology, if not better”, but that “as well if not better” is with respect to the goal as understood by the AI, not the goal as understood by the humans.
Like, you ask the AI “optimize this company, in a way that we would accept, after a ton of deliberation”, and it has a very-different-off-distribution notion than you about what constitutes the “company”, what counts as you “accepting”, and what it’s even optimizing the company for.
… and then we get to the part about the AI producing “a satisfying response”, and that’s where my deltas from Christiano will be more relevant.
(feel free to stop replying at any point, sorry if this is annoying)
> Like, you ask the AI “optimize this company, in a way that we would accept, after a ton of deliberation”, and it has a very-different-off-distribution notion than you about what constitutes the “company”, what counts as you “accepting”, and what it’s even optimizing the company for.
I’d assume that when we tell it, “optimize this company, in a way that we would accept, after a ton of deliberation”, this could be instead described as, “optimize this company, in a way that we would accept, after a ton of deliberation, where these terms are described using our ontology”
It seems like the AI can trivially figure out what humans would regard as the “company” or “accepting”. Like, it could generate a question like, “Would X qualify as the ‘company’, if asked to a human?”, and get an accurate response.
I agree that we would have a tough time understanding its goal / specifications, but I expect that it would be capable of answering questions about its goal in our ontology.
I’d assume that when we tell it, “optimize this company, in a way that we would accept, after a ton of deliberation”, this could be instead described as, “optimize this company, in a way that we would accept, after a ton of deliberation, where these terms are described using our ontology”
The problem shows up when the system finds itself acting in a regime where the notion of us (humans) “accepting” its optimizations becomes purely counterfactual, because no actual human is available to oversee its actions in that regime. Then the question of “would a human accept this outcome?” must ground itself somewhere in the system’s internal model of what those terms refer to, which (by hypothesis) need not remotely match their meanings in our native ontology.
This isn’t (as much of) a problem in regimes where an actual human overseer is present (setting aside concerns about actual human judgement being hackable because we don’t implement our idealized values, i.e. outer alignment), because there the system’s notion of ground truth actually is grounded by the validation of that overseer.
You can have a system that models the world using quantum field theory, task it with predicting the energetic fluctuations produced by a particular set of amplitude spikes corresponding to a human in our ontology, and it can perfectly well predict whether those fluctuations encode sounds or motor actions we’d interpret as indications of approval or disapproval—and as long as there’s an actual human there to be predicted, the system will do so without issue (again modulo outer alignment concerns).
But remove the human, and suddenly the system is no longer operating based on its predictions of the behavior of a real physical system, and is instead operating from some learned counterfactual representation consisting of proxies in its native QFT-style ontology which happened to coincide with the actual human’s behavior while the human was present. And that learned representation, in an ontology as alien as QFT, is (assuming the falsehood of the natural abstraction hypothesis) not going to look very much like the human we want it to look like.
I’m confused about what it means to “remove the human”, and why it’s so important whether the human is ‘removed’. Maybe if I try to nail down more parameters of the hypothetical, that will help with my confusion. For the sake of argument, can I assume...
That the AI is running computations involving quantum fields because it found that was the most effective way to make e.g. next-token predictions on its training set?
That the AI is in principle capable of running computations involving quantum fields to represent a genius philosopher?
If I can assume that stuff, then it feels like a fairly core task, abundantly stress-tested during training, to read off the genius philosopher’s spoken opinions about e.g. moral philosophy from the quantum fields. How else could quantum fields be useful for next-token predictions?
Another probe: Is alignment supposed to be hard in this hypothetical because the AI can’t represent human values in principle? Or is it supposed to be hard because it also has a lot of unsatisfactory representations of human values, and there’s no good method for finding a satisfactory needle in the unsatisfactory haystack? Or some other reason?
But remove the human, and suddenly the system is no longer operating based on its predictions of the behavior of a real physical system, and is instead operating from some learned counterfactual representation consisting of proxies in its native QFT-style ontology which happened to coincide with the actual human’s behavior while the human was present.
This sounds a lot like saying “it might fail to generalize”. Supposing we make a lot of progress on out-of-distribution generalization, is alignment getting any easier according to you? Wouldn’t that imply our systems are getting better at choosing proxies which generalize even when the human isn’t ‘present’?
I’m confused about what it means to “remove the human”, and why it’s so important whether the human is ‘removed’.
Because the human isn’t going to constantly be present for everything the system does after it’s deployed (unless for some reason it’s not deployed).
If I can assume that stuff, then it feels like a fairly core task, abundantly stress-tested during training, to read off the genius philosopher’s spoken opinions about e.g. moral philosophy from the quantum fields. How else could quantum fields be useful for next-token predictions?
Quantum fields are useful for an endless variety of things, from modeling genius philosophers to predicting lottery numbers. If your next-token prediction task involves any physically instantiated system, a model that uses QFT will be able to predict that system’s time-evolution with alacrity.
(Yes, this is computationally intractable, but we’re already in full-on hypothetical land with the QFT-based model to begin with. Remember, this is an exercise in showing what happens in the worst-case scenario for alignment, where the model’s native ontology completely diverges from our own.)
So we need not assume that predicting “the genius philosopher” is a core task. It’s enough to assume that the model is capable of it, among other things—which a QFT-based model certainly would be. Which, not so coincidentally, brings us to your next question:
Is alignment supposed to be hard in this hypothetical because the AI can’t represent human values in principle? Or is it supposed to be hard because it also has a lot of unsatisfactory representations of human values, and there’s no good method for finding a satisfactory needle in the unsatisfactory haystack? Or some other reason?
Consider how, during training, the human overseer (or genius philosopher, if you prefer) would have been pointed out to the model. We don’t have reliable access to its internal world-model, and even if we did we’d see blobs of amplitude and not much else. There’s no means, in that setting, of picking out the human and telling the model to unambiguously defer to that human.
What must happen instead, then, is something like next-token prediction: we perform gradient descent (or some other optimization method; it doesn’t really matter for the purposes of our story) on the model’s outputs, rewarding it when its outputs happen to match those of the human. The hope is that this will lead, in the limit, to the matching no longer occurring by happenstance—that if we train for long enough and in a varied enough set of situations, the best way for the model to produce outputs that track those of the human is to model that human, even in its QFT ontology.
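(As a very rough sketch of what “rewarding the model when its outputs happen to match those of the human” amounts to mechanically—all the names, sizes, and data below are made up for illustration, not anyone’s actual setup:)

```python
import torch
import torch.nn as nn

# Toy stand-ins: a tiny next-token model shaped purely by agreement with the
# overseer's actual outputs. There is no pointer to "the human" anywhere in
# here—only a loss that is low when the model's output matches the human's.
VOCAB, CTX, DIM = 1000, 16, 64  # hypothetical sizes

class TinyNextTokenModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.head = nn.Linear(CTX * DIM, VOCAB)

    def forward(self, context):             # context: (batch, CTX) token ids
        h = self.embed(context).flatten(1)  # (batch, CTX * DIM)
        return self.head(h)                 # logits over the next token

model = TinyNextTokenModel()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def training_step(context, overseer_next_token):
    """One gradient step of 'match the overseer': the only training signal is
    whether the model's predicted token agrees with the token the human produced."""
    logits = model(context)
    loss = loss_fn(logits, overseer_next_token)  # low loss == matched the human
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Fake batch: contexts plus the token the philosopher actually produced next.
context = torch.randint(0, VOCAB, (8, CTX))
overseer_next_token = torch.randint(0, VOCAB, (8,))
print(training_step(context, overseer_next_token))
```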
But do we know for a fact that this will be the case? Even if it is, what happens when the overseer isn’t present to provide their actual feedback—a situation that never arose during training? What becomes the model’s referent then? We’d like to deploy it without an overseer, or in situations too complex for an overseer to understand. And whether the model’s behavior in those situations conforms to what the overseer would ideally want depends on what kinds of behind-the-scenes extrapolation the model is doing—which, if the model’s native ontology is something in which “human philosophers” are not basic objects, is liable to look very weird indeed.
This sounds a lot like saying “it might fail to generalize”.
Sort of, yes—but I’d call it “malgeneralization” rather than “misgeneralization”. It’s not failing to generalize, it’s just not generalizing the way you’d want it to.
Supposing we make a lot of progress on out-of-distribution generalization, is alignment getting any easier according to you? Wouldn’t that imply our systems are getting better at choosing proxies which generalize even when the human isn’t ‘present’?
Depends on what you mean by “progress”, and “out-of-distribution”. A powerful QFT-based model can make perfectly accurate predictions in any scenario you care to put it in, so it’s not like you’ll observe it getting things wrong. What experiments, and experimental outcomes, are you imagining here, such that those outcomes would provide evidence of “progress on out-of-distribution generalization”, when fundamentally the issue is expected to arise in situations where the experimenters are themselves absent (and which—crucially—is not a condition you can replicate as part of an experimental setup)?
Because the human isn’t going to constantly be present for everything the system does after it’s deployed (unless for some reason it’s not deployed).
I think it ought to be possible for someone to always be present. [I’m also not sure it would be necessary.]
So we need not assume that predicting “the genius philosopher” is a core task.
It’s not the genius philosopher that’s the core task, it’s the reading of their opinions out of a QFT-based simulation of them. As I understand this thought experiment, we’re doing next-token prediction on e.g. a book written by a philosopher, and in order to predict the next token using QFT, the obvious method is to use QFT to simulate the philosopher. But that’s not quite enough—you also need to read the next token out of that QFT-based simulation if you actually want to predict it. This sort of ‘reading tokens out of a QFT simulation’ thing would be very common, thus something the system gets good at in order to succeed at next-token prediction.
I think perhaps there’s more to your thought experiment than just alien abstractions, and it’s worth disentangling these assumptions. For one thing, in a standard train/dev/test setup, the model is arguably not really doing prediction, it’s doing retrodiction. It’s making ‘predictions’ about things which already happened in the past. The final model is chosen based on what retrodicts the data the best. Also, usually the data is IID rather than sequential—there’s no time component to the data points (unless it’s a time-series problem, which it usually isn’t). The fact that we’re choosing a model which retrodicts well is why the presence/absence of a human is generally assumed to be irrelevant, and emphasizing this factor sounds wacky to my ML engineer ears.
So basically I suspect what you’re really trying to claim here, which incidentally I’ve also seen John allude to elsewhere, is that the standard assumptions of machine learning involving retrodiction and IID data points may break down once your system gets smart enough. This is a possibility worth exploring, I just want to clarify that it seems orthogonal to the issue of alien abstractions. In principle one can imagine a system that heavily features QFT in its internal ontology yet still can be characterized as retrodicting on IID data, or a system with vanilla abstractions that can’t be characterized as retrodicting on IID data. I think exploring this in a post could be valuable, because it seems like an under-discussed source of disagreement between certain doomer-type people and mainstream ML folks.
I think it ought to be possible for someone to always be present. [I’m also not sure it would be necessary.]
I think I don’t understand what you’re imagining here. Are you imagining a human manually overseeing all outputs of something like ChatGPT, or Microsoft Copilot, before those outputs are sent to the end user (or, worse yet, put directly into production)?
[I also think I don’t understand why you make the bracketed claim you do, but perhaps hashing that out isn’t a conversational priority.]
As I understand this thought experiment, we’re doing next-token prediction on e.g. a book written by a philosopher, and in order to predict the next token using QFT, the obvious method is to use QFT to simulate the philosopher. But that’s not quite enough—you also need to read the next token out of that QFT-based simulation if you actually want to predict it.
It sounds like your understanding of the thought experiment differs from mine. If I were to guess, I’d guess that by “you” you’re referring to someone or something outside of the model, who has access to the model’s internals, and who uses that access to, as you say, “read” the next token out of the model’s ontology. However, this is not the setup we’re in with respect to actual models (with the exception perhaps of some fairly limited experiments in mechanistic interpretability)—and it’s also not the setup of the thought experiment, which (after all) is about precisely what happens when you can’t read things out of the model’s internal ontology, because it’s too alien to be interpreted.
In other words: “you” don’t read the next token out of the QFT simulation. The model is responsible for doing that translation work. How do we get it to do that, even though we don’t know how to specify the nature of the translation work, much less do it ourselves? Well, simple: in cases where we have access to the ground truth of the next token, e.g. because we’re having it predict an existing book passage, we simply penalize it whenever its output fails to match the next token in the book. In this way, the model can be incentivized to correctly predict whatever we want it to predict, even if we wouldn’t know how to tell it explicitly to do whatever it’s doing.
(The nature of this relationship—whereby humans train opaque algorithms to do things they wouldn’t themselves be able to write out as pseudocode—is arguably the essence of modern deep learning in toto.)
For one thing, in a standard train/dev/test setup, the model is arguably not really doing prediction, it’s doing retrodiction. It’s making ‘predictions’ about things which already happened in the past. The final model is chosen based on what retrodicts the data the best.
Yes, this is a reasonable description to my eyes. Moreover, I actually think it maps fairly well to the above description of how a QFT-style model might be trained to predict the next token of some body of text; in your terms, this is possible specifically because the text already exists, and retrodictions of that text can be graded based on how well they compare against the ground truth.
Also, usually the data is IID rather than sequential—there’s no time component to the data points (unless it’s a time-series problem, which it usually isn’t).
This, on the other hand, doesn’t sound right to me. Yes, there are certainly applications where the training regime produces IID data, but next-token prediction is pretty clearly not one of those? Later tokens are highly conditionally dependent on previous tokens, in a way that’s much closer to a time series than to some kind of IID process. Possibly part of the disconnect is that we’re imagining different applications entirely—which might also explain our differing intuitions w.r.t. deployment?
The fact that we’re choosing a model which retrodicts well is why the presence/absence of a human is generally assumed to be irrelevant, and emphasizing this factor sounds wacky to my ML engineer ears.
Right, so just to check that we’re on the same page: do we agree that after a (retrodictively trained) model is deployed for some use case other than retrodicting existing data—for generative use, say, or for use in some kind of online RL setup—it’ll be doing something other than retrodicting? And that in that situation, the source of (retrodictable) ground truth that was present during training—whether that was a book, a philosopher, or something else—will be absent?
If we do actually agree about that, then that distinction is really all I’m referring to! You can think of it as training set versus test set, to use a more standard ML analogy, except in this case the “test set” isn’t labeled at all, because no one labeled it in advance, and also it’s coming in from an unpredictable outside world rather than from a folder on someone’s hard drive.
Why does that matter? Well, because then we’re essentially at the mercy of the model’s generalization properties, in a way we weren’t while it was retrodicting the training set (or even the validation set, if one of those existed). If it gets anything wrong, there’s no longer any training signal or gradient to penalize it for being “wrong”—so the only remaining question is, just how likely is it to be “wrong”, after being trained for however long it was trained?
And that’s where the QFT model comes in. It says, actually, even if you train me for a good long while on a good amount of data, there are lots of ways for me to generalize “wrongly” from your perspective, if I’m modeling the universe at the level of quantum fields. Sure, I got all the retrodictions right while there was something to be retrodicted, but what exactly makes you think I did that by modeling the philosopher whose remarks I was being trained on?
Maybe I was predicting the soundwaves passing through a particular region of air in the room where he was located—or perhaps I was predicting the pattern of physical transistors in the segment of memory of a particular computer containing his works. Those physical locations in spacetime still exist, and now that I’m deployed, I continue to make predictions using those as my referent—except the encodings I’m predicting there no longer resemble anything like coherent moral philosophy, or coherent anything, really.
The philosopher has left the room, or the computer’s memory has been reconfigured—so what exactly are the criteria by which I’m supposed to act now? Well, they’re going to be something, presumably—but they’re not going to be something explicit. They’re going to be something implicit to my QFT ontology, something that—back when the philosopher was there, during training—worked in tandem with the specifics of his presence, and the setup involving him, to produce accurate retrodictions of his judgements on various matters.
Now that that’s no longer the case, those same criteria describe some mathematical function that bears no meaningful correspondence to anything a human would recognize, valuable or not—but the function exists, and it can be maximized. Not much can be said about what maximizing that function might result in, except that it’s unlikely to look anything like “doing right according to the philosopher”.
That’s why the QFT example is important. A more plausible model, one that doesn’t think natively in terms of quantum amplitudes, permits the possibility of correctly compressing what we want it to compress—of learning to retrodict, not some strange physical correlates of the philosopher’s various motor outputs, but the actual philosopher’s beliefs as we would understand them. Whether that happens, or whether a QFT-style outcome happens instead, depends in large part on the inductive biases of the model’s architecture and the training process—inductive biases on which the natural abstraction hypothesis asserts a possible constraint.
If I were to guess, I’d guess that by “you” you’re referring to someone or something outside of the model, who has access to the model’s internals, and who uses that access to, as you say, “read” the next token out of the model’s ontology.
Was using a metaphorical “you”. Probably should’ve said something like “gradient descent will find a way to read the next token out of the QFT-based simulation”.
Yes, there are certainly applications where the training regime produces IID data, but next-token prediction is pretty clearly not one of those?
I suppose I should’ve said that the various documents are IID, to be more clear. I would certainly guess they are.
Right, so just to check that we’re on the same page: do we agree that after a (retrodictively trained) model is deployed for some use case other than retrodicting existing data—for generative use, say, or for use in some kind of online RL setup—then it’ll doing something other than retrodicting?
Generally speaking, yes.
And that’s where the QFT model comes in. It says, actually, even if you train me for a good long while on a good amount of data, there are lots of ways for me to generalize “wrongly” from your perspective, if I’m modeling the universe at the level of quantum fields. Sure, I got all the retrodictions right while there was something to be retrodicted, but what exactly makes you think I did that by modeling the philosopher whose remarks I was being trained on?
Well, if we’re following standard ML best practices, we have a train set, a dev set, and a test set. The purpose of the dev set is to check and ensure that things are generalizing properly. If they aren’t generalizing properly, we tweak various hyperparameters of the model and retrain until they do generalize properly on the dev set. Then we do a final check on the test set to ensure we didn’t overfit the dev set. If you forgot or never learned this stuff, I highly recommend brushing up on it.
In principle we could construct a test set or dev set either before or after the model has been trained. It shouldn’t make a difference under normal circumstances. It sounds like maybe you’re discussing a scenario where the model has achieved a level of omniscience, and it does fine on data that was available during its training, because it’s able to read off of an omniscient world-model. But then it fails on data generated in the future, because the translation method for its omniscient world-model only works on artifacts that were present during training. Basically, the time at which the data was generated could constitute a hidden and unexpected source of distribution shift. Does that summarize the core concern?
(To be clear, this sort of acquired-omniscience is liable to sound kooky to many ML researchers. I think it’s worth stress-testing alignment proposals under these sort of extreme scenarios, but I’m not sure we should weight them heavily in terms of estimating our probability of success. In this particular scenario, the model’s performance would drop on data generated after training, and that would hurt the company’s bottom line, and they would have a strong financial incentive to fix it. So I don’t know if thinking about this is a comparative advantage for alignment researchers.)
BTW, the point about documents being IID was meant to indicate that there’s little incentive for the model to e.g. retrodict the coordinates of the server storing a particular document—the sort of data that could aid and incentivize omniscience to a greater degree.
In any case, I would argue that “accidental omniscience” characterizes the problem better than “alien abstractions”. As before, you can imagine an accidentally-omniscient model that uses vanilla abstractions, or a non-omniscient model that uses alien ones.
Well, if we’re following standard ML best practices, we have a train set, a dev set, and a test set. The purpose of the dev set is to check and ensure that things are generalizing properly. If they aren’t generalizing properly, we tweak various hyperparameters of the model and retrain until they do generalize properly on the dev set. Then we do a final check on the test set to ensure we didn’t overfit the dev set. If you forgot or never learned this stuff, I highly recommend brushing up on it.
(Just to be clear: yes, I know what training and test sets are, as well as dev sets/validation sets. You might notice I actually used the phrase “validation set” in my earlier reply to you, so it’s not a matter of guessing someone’s password—I’m quite familiar with these concepts, as someone who’s implemented ML models myself.)
Generally speaking, training, validation, and test datasets are all sourced the same way—in fact, sometimes they’re literally sourced from the same dataset, and the delineation between train/dev/test is introduced during training itself, by arbitrarily carving up the original dataset into smaller sets of appropriate size. This may capture the idea of “IID” you seem to appeal to elsewhere in your comment—that it’s possible to test the model’s generalization performance on some held-out subset of data from the same source(s) it was trained on.
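(For concreteness, here’s a minimal, entirely hypothetical sketch of that kind of carving-up—placeholder corpus, arbitrary split ratios—just to make explicit that all three splits are drawn from the same pool:)

```python
import random

# Carve one pool of data into train/dev/test. Because every split comes from
# the same source distribution, dev/test can only probe generalization to
# more data of the same kind—not to a shift that appears only after deployment.
documents = [f"doc_{i}" for i in range(10_000)]  # placeholder corpus

rng = random.Random(0)
rng.shuffle(documents)

n = len(documents)
train = documents[: int(0.8 * n)]
dev = documents[int(0.8 * n): int(0.9 * n)]
test = documents[int(0.9 * n):]

print(len(train), len(dev), len(test))  # 8000 1000 1000
```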
In ML terms, what the thought experiment points to is a form of underlying distributional shift, one that isn’t (and can’t be) captured by “IID” validation or test datasets. The QFT model in particular highlights the extent to which your training process, however broad or inclusive from a parochial human standpoint, contains many incidental distributional correlates to your training signal which (1) exist in all of your data, including any you might hope to rely on to validate your model’s generalization performance, and (2) cease to correlate off-distribution, during deployment.
This can be caused by what you call “omniscience”, but it need not be; there are other, more plausible distributional differences that might be picked up on by other kinds of models. But QFT is (as far as our current understanding of physics goes) very close to the base ontology of our universe, and so what is inferable using QFT is naturally going to be very different from what is inferable using some other (less powerful) ontology. QFT is a very powerful ontology!
If you want to call that “omniscience”, you can, although note that strictly speaking the model is still just working from inferences from training data. It’s just that, if you feed enough data to a model that can hold entire swaths of the physical universe inside of its metaphorical “head”, pretty soon hypotheses that involve the actual state of that universe will begin to outperform hypotheses that don’t, and which instead use some kind of lossy approximation of that state involving intermediary concepts like “intent”, “belief”, “agent”, “subjective state”, etc.
In principle we could construct a test set or dev set either before or after the model has been trained. It shouldn’t make a difference under normal circumstances. It sounds like maybe you’re discussing a scenario where the model has achieved a level of omniscience, and it does fine on data that was available during its training, because it’s able to read off of an omniscient world-model. But then it fails on data generated in the future, because the translation method for its omniscient world-model only works on artifacts that were present during training. Basically, the time at which the data was generated could constitute a hidden and unexpected source of distribution shift. Does that summarize the core concern?
You’re close; I’d say the concern is slightly worse than that. It’s that the “future data” never actually comes into existence, at any point. So the source of distributional shift isn’t just “the data is generated at the wrong time”, it’s “the data never gets externally generated to begin with, and you (the model) have to work with predictions of what the data counterfactually would have been, had it been generated”.
(This would be the case e.g. with any concept of “human approval” that came from a literal physical human or group of humans during training, and not after the system was deployed “in the wild”.)
In any case, I would argue that “accidental omniscience” characterizes the problem better than “alien abstractions”. As before, you can imagine an accidentally-omniscient model that uses vanilla abstractions, or a non-omniscient model that uses alien ones.
The problem is that “vanilla” abstractions are not the most predictively useful possible abstractions, if you’ve got access to better ones. And models whose ambient hypothesis space is broad enough to include better abstractions (from the standpoint of predictive accuracy) will gravitate towards those, as is incentivized by the outer form of the training task. QFT is the extreme example of a “better abstraction”, but in principle (if the natural abstraction hypothesis fails) there will be all sorts and shapes of abstractions, and some of them will be available to us, and some of them will be available to the model, and these sets will not fully overlap—which is a concern in worlds where different abstractions lead to different generalization properties.
QFT is the extreme example of a “better abstraction”, but in principle (if the natural abstraction hypothesis fails) there will be all sorts and shapes of abstractions, and some of them will be available to us, and some of them will be available to the model, and these sets will not fully overlap—which is a concern in worlds where different abstractions lead to different generalization properties.
Indeed. I think the key thing for me is, I expect the model to be strongly incentivized to have a solid translation layer from its internal ontology to e.g. English language, due to being trained on lots of English language data. Due to Occam’s Razor, I expect the internal ontology to be biased towards that of an English-language speaker.
It’s just that, if you feed enough data to a model that can hold entire swaths of the physical universe inside of its metaphorical “head”, pretty soon hypotheses that involve the actual state of that universe will begin to outperform hypotheses that don’t, and which instead use some kind of lossy approximation of that state involving intermediary concepts like “intent”, “belief”, “agent”, “subjective state”, etc.
I’m imagining something like: early in training the model makes use of those lossy approximations because they are a cheap/accessible way to improve its predictive accuracy. Later in training, assuming it’s being trained on the sort of gigantic scale that would allow it to hold swaths of the physical universe in its head, it loses those desired lossy abstractions due to catastrophic forgetting. Is that an OK way to operationalize your concern?
I’m still not convinced that this problem is a priority. It seems like a problem which will be encountered very late if ever, and will lead to ‘random’ failures on predicting future/counterfactual data in a way that’s fairly obvious.
Nitpicky edit request: your comment contains some typos that make it a bit hard to parse (“be other”, “we it”). (So apologies if my reaction misunderstands your point.)
[Assuming that the opposite of the natural abstraction hypothesis is true—ie, not just that “not all powerful AIs share ontology with us”, but actually “most powerful AIs don’t share ontology with us”:] I also expect that an AI with superior ontology would be able to answer your questions about its ontology, in a way that would make you feel like[1] you understand what is happening. But that isn’t the same as being able to control the AI’s actions, or being able to affect its goal specification in a predictable way (to you). You totally wouldn’t be able to do that.
([Vague intuition, needs work] I suspect that if you had a method for predictably-to-you translating from your ontology to the AI’s ontology, then this could be used to prove that you can easily find a powerful AI that shares an ontology with us. Because that AI could be basically thought of as using our ontology.)
Though note that unless you switched to some better ontology, you wouldn’t actually understand what is going on, because your ontology is so bogus that it doesn’t even make sense to talk about “you understanding [stuff]”. This might not be true for all kinds of [stuff], though. EG, perhaps our understanding of set theory is fine while our understanding of agency, goals, physics, and whatever else, isn’t.
If it can quantum-simulate a human brain, then it can in principle decode things from it as well. The question is how to demand that it do so in the math that defines the system.
Why do you assume that we need to demand this be done in “the math that defines the system”?
I would assume we could have a discussion with this higher-ontology being to find a happy specification, using our ontologies, that it can tell us we’ll like, also using our ontologies.
A 5-year-old might not understand an adult’s specific definition of “heavy”, but it’s not too hard for it to ask for a heavy thing.
I don’t at all think that’s off the table for now! I just don’t trust that it’ll stay on the table—if the adult has malicious intent, knowing what the child means isn’t enough; it seems hard to know when it’ll stop being viable without more progress. (For example, I doubt it’ll ever be a good idea to do that with an OpenAI model; they seem highly deceptively misaligned to most of their users. It seems possible for it to be a good idea with Claude.) But the challenge is how to certify that the math does in fact say the right thing to durably point to the ontology in which we want to preserve good things; at some point we have to actually understand some sort of specification that constrains the stuff we don’t understand to actually be doing what it seems to say in natural language.
I think this quantum fields example is perhaps not all that forceful, because in your OP you state
maybe a faithful and robust translation would be so long in the system’s “internal language” that the translation wouldn’t fit in the system
However, it sounds like you’re describing a system where we represent humans using quantum fields as a routine matter, so fitting the translation into the system isn’t sounding like a huge problem? Like, if I want to know the answer to some moral dilemma, I can simulate my favorite philosopher at the level of quantum fields in order to hear what they would say if they were asked about the dilemma. Sounds like it could be just as good as an em, where alignment is concerned.
It’s hard for me to imagine a world where developing representations that allow you to make good next-token predictions etc. doesn’t also develop representations that can somehow be useful for alignment. Would be interested to hear fleshed-out counterexamples.
I assume we all agree that the system can understand the human ontology, though? This is at least necessary for communicating and reasoning about humans, which LLMs can clearly already do to some extent.
Can we reason about a thermostat’s ontology? Only sort of. We can say things like “The thermostat represents the local temperature. It wants that temperature to be the same as the set point.” But the thermostat itself is only very loosely approximating that kind of behavior—imputing any sort of generalizability to it that it doesn’t actually have is an anthropomorphic fiction. And it’s blatantly a fiction, because there’s more than one way to do it—you can suppose the thermostat wants only the temperature sensor to be at the right temperature vs. it wants the whole room vs. the whole world to be at that temperature, or that it’s “changing its mind” when it breaks vs. it would want to be repaired, etc.
To the superintelligent AI, we are the thermostat. You cannot be aligned to humans purely by being smart, because finding “the human ontology” is an act of interpretation, of story-telling, not just a question of fact. Helping an AI narrow down how to interpret humans as moral patients requires giving it extra assumptions or meta-level processes. (Or as I might call it, “solving the alignment problem.”)
How can this be, if a smart AI can talk to humans intelligibly and predict their behavior and so forth, even without specifying any of my “extra assumptions”? Well, how can we interact with a thermostat in a way that it can “understand,” even without fixing any particular story about its desires? We understand how it works in our own way, and we take actions using our own understanding. Often our interactions fall in the domain of the normal functioning of the thermostat, under which several different possible stories about “what the thermostat wants” apply, and sometimes we think about such stories but mostly we don’t bother.
Your thermostat example seems to rather highlight a disanalogy: The concept of a goal doesn’t apply to the thermostat because there is apparently no fact of the matter about which counterfactual situations would satisfy such a “goal”. I think part of the reason is that the concept of a goal requires the ability to apply it to counterfactual situations. But for humans there is such a fact of the matter; there are things that would be incompatible with or required by our goals. Even though some/many other things may be neutral (neither incompatible nor necessary).
So I don’t think there are any “extra assumptions” needed. In fact, even if there were such extra assumptions, it’s hard to see how they could be relevant. (This is analogous to the ancient philosophical argument that God declaring murder to be good obviously wouldn’t make it good, so God declaring murder to be bad must be irrelevant to murder being bad.)
Pick a goal, and it’s easy to say what’s required. But pick a human, and it’s not easy to say what their goal is.
Is my goal to survive? And yet I take plenty of risky actions like driving that trade that off against other things. And even worse, I deliberately undergo some transformative experiences (e.g. moving to a different city and making a bunch of new friends) that in some sense “make me a different person.” And even worse, sometimes I’m irrational or make mistakes, but under different interpretations of my behavior different things are irrational. If you interpret me as really wanting to survive, driving is an irrational thing I do because it’s common in my culture and I don’t have a good intuitive feel for statistics. If you interpret me a different way, maybe my intuitive feeling gets interpreted as more rational but my goal changes from survival to something more complicated.
More complicated, yes, but I assume the question is whether superintelligent AIs can understand what you want “overall” at least as well as other humans can. And here I would agree with ozziegooen: the answer seems to be yes—even if they otherwise tend to reason about things differently than we do. Because there seems to be a fact of the matter about what you want overall, even if it is not easy to predict. But predicting it is not obviously inhibited by a tendency to think in different terms (“ontology”). Is the worry perhaps that the AI finds the concept of “what the human wants overall” unnatural, so is unlikely to optimize for it?
If there was no fact of the matter of what you want overall, there would be no fact of the matter of whether an AI is aligned with you or not. Which would mean there is no alignment problem.
The referenced post seems to apply specifically to IRL, which is purely based on behaviorism and doesn’t take information about the nature of the agent into account. (E.g. the fact that humans evolved from natural selection tells us a lot of what they probably want, and information about their brain could tell us how intelligent they are.) It’s also only an epistemic point about the problem of externally inferring values, not about those values not existing.
See my sequence “Reducing Goodhart” for what I (or me from a few years ago) think the impact is on the alignment problem.
the fact that humans evolved from natural selection tells us a lot of what they probably want,
Sure. But only if you already know what evolved creatures tend to want. I.e. once you have already made interpretive choices in one case, you can get some information on how well they hang together with other cases.
Simplifying somewhat: I think that my biggest delta with John is that I don’t think the natural abstraction hypothesis holds. (EG, if I believed it holds, I would become more optimistic about single-agent alignment, to the point of viewing Moloch as higher priority.) At the same time, I believe that powerful AIs will be able to understand humans just fine. My vague attempt at reconciling these two is something like this:
Humans have some ontology, in which they think about the world. This corresponds to a world model. This world model has a certain amount of prediction errors.
The powerful AI wants to have much lower prediction error than that. When I say “natural abstraction hypothesis is false”, I imagine something like: If you want to have a much lower prediction error than that, you have to use a different ontology / world-model than humans use. And in fact if you want sufficiently low error, then all ontologies that can achieve that are very different from our ontology—either (reasonably) simple and different, or very complex (and, I guess, therefore also different).
So when the AI “understands humans perfectly well”, that means something like: The AI can visualise the flawed (ie, high prediction error) model that we use to think about the world. And it does this accurately. But it also sees how the model is completely wrong, and how the things that we say we want only make sense in that model, which has very little to do with the actual world.
(An example would be how a four-year old might think about the world in terms of Good people and Evil people. The government sometimes does Bad things because there are many Evil people in it. And then the solution is to replace all the Evil people by Good people. And that might internally make sense, and maybe an adult can understand this way of thinking, while also being like “this has nothing to do with how the world actually works; if you want to be serious about anything, just throw this model out”.)
So when the AI “understands humans perfectly well”, that means something like: The AI can visualise the flawed (ie, high prediction error) model that we use to think about the world. And it does this accurately. But it also sees how the model is completely wrong, and how the things that we say we want only make sense in that model, which has very little to do with the actual world.
This sounds a lot like a good/preferable thing to me. I would assume that we’d generally want AIs with ideal / superior ontologies.
It’s not clear to me why you’d think such a scenario would make us less optimistic about single-agent alignment. (If I’m understanding correctly)
As a quick reaction, let me just note that I agree that (all else being equal) this (ie, “the AI understanding us & having superior ontology”) seems desirable. And also that my comment above did not present any argument about why we should be pessimistic about AI X-risk if we believe that the natural abstraction hypothesis is false. (I was just trying to explain why/how “the AI has a different ontology” is compatible with “the AI understands our ontology”.)
As a longer reaction: I think my primary reason for pessimism, if the natural abstraction hypothesis is false, is that a bunch of existing proposals might work if the hypothesis were true, but don’t work if the hypothesis is false. (EG, if the hypothesis is true, I can imagine that “do a lot of RLHF, and then ramp up the AI’s intelligence” could just work. Similarly for “just train the AI to not be deceptive”.)
If I had to gesture at an underlying principle, then perhaps it could be something like: Suppose we successfully code up an AI which is pretty good at optimising, or create a process which gives rise to such an AI. [Inference step missing here.] Then the goals and planning of this AI will be happening in some ontology which allows for low prediction error. But this will be completely alien to our ontology. [Inference step missing here.] And, therefore, things that score very highly with respect to these (“alien”) goals will have roughly no value[1] according to our preferences. (I am not quite clear on this, but I think that if this paragraph was false, then you could come up with a way of falsifying my earlier description of how it looks like when the natural abstraction hypothesis is false.)
EG, if the hypothesis is true, I can imagine that “do a lot of RLHF, and then ramp up the AI’s intelligence” could just work. Similarly for “just train the AI to not be deceptive”.
Thanks, this makes sense to me.
Yea, I guess I’m unsure about that ‘[Inference step missing here.]’. My guess is that such a system would be able to recognize situations where things that score highly with respect to its ontology would score low, or would be likely to score low, using a human ontology. Like, it would be able to simulate a human deliberating on this for a very long time and coming to some conclusion.
I imagine that the cases where this would be scary are some narrow ones (though perhaps likely ones) where the system is dramatically intelligent in specific ways but incredibly inept in others. This ineptness isn’t severe enough to stop it from taking over the world, but it is enough to stop it from being at all able to optimize for our goals—and it also, for some reason, doesn’t take basic risk measures like “just keep a bunch of humans around and chat with them a whole lot, when curious”, or “try to first make a better AI that doesn’t have these failures, before taking huge unilateralist actions”.
It’s very hard for me to imagine such an agent, but that doesn’t mean it’s not possible, or perhaps likely.
[I am confused about your response. I fully endorse your paragraph on “the AI with superior ontology would be able to predict how humans would react to things”. But then the follow-up, on when this would be scary, seems mostly irrelevant / wrong to me—meaning that I am missing some implicit assumptions, misunderstanding how you view this, etc. I will try react in a hopefully-helpful way, but I might be completely missing the mark here, in which case I apologise :).]
I think the problem is that there is a difference between: (1) AI which can predict how things score in human ontology; and (2) AI which has “select things that score high in human ontology” as part of its goal[1]. And then, in the worlds where natural abstraction hypothesis is false: Most AIs achieve (1) as a by-product of the instrumental sub-goal of having low prediction error / being selected by our training processes / being able to manipulate humans. But us successfully achieving (2) for a powerful AI would require the natural abstraction hypothesis[2].
And this leaves us with two options. First, maybe we just have no write access to the AI’s utility function at all. (EG, my neighbour would be very happy if I gave him $10k, but he doesn’t have any way of making me (intrinsically) desire doing that.) Second, we might have write access to the AI’s utility function, but not in a way that will lead to predictable changes in goals or behaviour. (EG, if you give me full access to the weights of an LLM, it’s not like I know how to use that to turn that LLM into an actually-helpful assistant.) (And both of these seem scary to me, because of the argument that “not-fully-aligned goal + extremely powerful optimisation ==> extinction”. Which I didn’t argue for here.)
More precisely: Damn, we need a better terminology here. The way I understand things, “natural abstraction hypothesis” is the claim that most AIs will converge to an ontology that is similar to ours. The negation of that is that a non-trivial portion of AIs will use an ontology that is different from ours. What I subscribe to is that “almost no powerful AIs will use an ontology that is similar to ours”. Let’s call that “strong negation” of the natural abstraction hypothesis. So achieving (2) would be a counterexample to this strong negation. Ironically, I believe the strong negation hypothesis because I expect that very powerful AIs will arrive at similar ways of modelling the world—and those are all different from how we model the world.
I assume we all agree that the system can understand the human ontology, though?
This, however likely, is not certain. One way this assumption could fail is if a system allocates minimal cognitive capacity to its internal ontology and devotes the remaining capacity to selecting the best actions; this may be a viable strategy if the system’s world model is still descriptive enough, but has no spare room to represent the human ontology fully.
I’m trying to understand this debate, and probably failing.
>human concepts cannot be faithfully and robustly translated into the system’s internal ontology at all.
I assume we all agree that the system can understand the human ontology, though? This is at least necessary for communicating and reasoning about humans, which LLMs can clearly already do to some extent.
There’s a lot of work around mapping ontologies, and this is known to be difficult, but very possible—especially for a superhuman intelligence.
So, I fail to see what exactly the problem is. If this smarter system can understand and reason about human ways of thinking about the world, I assume it could optimize for these ways if it wanted to. I assume the main question is if it wants to—but I fail to understand how this is an issue of ontology.
If a system really couldn’t reason about human ontologies, then I don’t see how it would understand the human world at all.
I’d appreciate any posts that clarify this question.
This would probably need a whole additional post to answer fully, but I can kinda gesture briefly in the right direction.
Let’s use a standard toy model: an AI which models our whole world using quantum fields directly. Does this thing “understand the human ontology”? Well, the human ontology is embedded in its model in some sense (since there are quantum-level simulations of humans embedded in its model), but the AI doesn’t actually factor any of its cognition through the human ontology. So if we want to e.g. translate some human instructions or human goals or some such into that AI’s ontology, we need a full quantum-level specification of the instructions/goals/whatever.
Now, presumably we don’t actually expect a strong AI to simulate the whole world at the level of quantum fields, but that example at least shows what it could look like for an AI to be highly capable, including able to reason about and interact with humans, but not use the human ontology at all.
Thanks for that, but I’m left just as confused.
I assume that this AI agent would be able to have conversations with humans about our ontologies. I strongly assume it would need to be able to do the work of “thinking through our eyes/ontologies” to do this.
I’d imagine the situation would be something like,
1. The agent uses quantum-simulations almost all of the time.
2. In cases where it needs to answer human questions, like AP Physics problems, it easily understands how to construct the models/ontologies that humans use in order to do so.
Similar to how graduate physicists can still do mechanics questions without considering special relativity or quantum effects, if asked.
So I’d assume that the agent/AI could do the work of translation—we wouldn’t need to.
I guess, here are some claims:
1) Humans would have trouble policing a being way smarter than us.
2) Humans would have trouble understanding AIs with much more complex ontologies.
3) AIs with more complex ontologies would have trouble understanding humans.
#3 seems the most suspect to me, though 1 and 2 also seem questionable.
Why would an AI need to do that? It can just simulate what happens conditional on different sounds coming from its speaker or whatever, and then emit the sounds which result in the outcomes which it wants.
A human ontology is not obviously the best tool, even for e.g. answering mostly-natural-language questions on an exam. Heck, even today’s exam help services will often tell you to guess which answer the graders will actually mark as correct, rather than taking questions literally or whatever. Taken to the extreme, an exam-acing AI would plausibly perform better by thinking about the behavior of the physical system which is a human grader (or a human recording the “correct answers” for an automated grader to use), rather than trying to reason directly about the semantics of the natural language as a human would interpret it.
(To be clear, my median model does not disagree with you here, but I’m playing devil’s advocate.)
Thanks! I wasn’t expecting that answer.
I think that raises more questions than it answers, naturally. (“Okay, can an agent so capable that they can easily make a quantum-simulation to answer tests, really not find some way of effectively understanding human ontologies for decision-making?”), but it seems like this is more for Eliezer, and also, that might be part of a longer post.
This one I can answer quickly:
Could it? Maybe. But why would it? What objective, either as the agent’s internal goal or as an outer optimization signal, would incentivize the agent to bother using a human ontology at all, when it could instead use the predictively-superior quantum simulator? Like, any objective ultimately grounds out in some physical outcome or signal, and the quantum simulator is just better for predicting which actions have which effects on that physical outcome/signal.
If it’s able to function as well as it would if it understands our ontology, if not better, then why does it then matter if it doesn’t use our ontology?
I assume a system you’re describing could still be used by humans to do (basically) all of the important things. Like, we could ask it “optimize this company, in a way that we would accept, after a ton of deliberation”, and it could produce a satisfying response.
> But why would it? What objective, either as the agent’s internal goal or as an outer optimization signal, would incentivize the agent to bother using a human ontology at all, when it could instead use the predictively-superior quantum simulator?
I mean, if it can always act just as well as if it could understand human ontologies, then I don’t see the benefit of it “technically understanding human ontologies”. This seems like it’s turning into a semantic argument or something.
If an agent can trivially act as if it understands Ontology X, where/why does it actually matter that it doesn’t technically “understand” ontology X?
I assume that the argument that “this distinction matters a lot” would functionally play out in there being some concrete things that it can’t do.
Bear in mind that the goal itself, as understood by the AI, is expressed in the AI’s ontology. The AI is “able to function as well as it would if it understands our ontology, if not better”, but that “as well if not better” is with respect to the goal as understood by the AI, not the goal as understood by the humans.
Like, you ask the AI “optimize this company, in a way that we would accept, after a ton of deliberation”, and it has a very-different-off-distribution notion than you about what constitutes the “company”, and counts as you “accepting”, and what it’s even optimizing the company for.
… and then we get to the part about the AI producing “a satisfying response”, and that’s where my deltas from Christiano will be more relevant.
(feel free to stop replying at any point, sorry if this is annoying)
> Like, you ask the AI “optimize this company, in a way that we would accept, after a ton of deliberation”, and it has a very-different-off-distribution notion than you about what constitutes the “company”, and counts as you “accepting”, and what it’s even optimizing the company for.
I’d assume that when we tell it, “optimize this company, in a way that we would accept, after a ton of deliberation”, this could be instead described as, “optimize this company, in a way that we would accept, after a ton of deliberation, where these terms are described using our ontology”
It seems like the AI can trivially figure out what humans would regard as the “company” or “accepting”. Like, it could generate any question like, “Would X qualify as the ‘company’, if you asked a human?”, and get an accurate response.
I agree that we would have a tough time understanding its goal / specifications, but I expect that it would be capable of answering questions about its goal in our ontology.
The problem shows up when the system finds itself acting in a regime where the notion of us (humans) “accepting” its optimizations becomes purely counterfactual, because no actual human is available to oversee its actions in that regime. Then the question of “would a human accept this outcome?” must ground itself somewhere in the system’s internal model of what those terms refer to, which (by hypothesis) need not remotely match their meanings in our native ontology.
This isn’t (as much of) a problem in regimes where an actual human overseer is present (setting aside concerns about actual human judgement being hackable because we don’t implement our idealized values, i.e. outer alignment), because there the system’s notion of ground truth actually is grounded by the validation of that overseer.
You can have a system that models the world using quantum field theory, task it with predicting the energetic fluctuations produced by a particular set of amplitude spikes corresponding to a human in our ontology, and it can perfectly well predict whether those fluctuations encode sounds or motor actions we’d interpret as indications of approval or disapproval—and as long as there’s an actual human there to be predicted, the system will do so without issue (again modulo outer alignment concerns).
But remove the human, and suddenly the system is no longer operating based on its predictions of the behavior of a real physical system, and is instead operating from some learned counterfactual representation consisting of proxies in its native QFT-style ontology which happened to coincide with the actual human’s behavior while the human was present. And that learned representation, in an ontology as alien as QFT, is (assuming the falsehood of the natural abstraction hypothesis) not going to look very much like the human we want it to look like.
I’m confused about what it means to “remove the human”, and why it’s so important whether the human is ‘removed’. Maybe if I try to nail down more parameters of the hypothetical, that will help with my confusion. For the sake of argument, can I assume...
That the AI is running computations involving quantum fields because it found that was the most effective way to make e.g. next-token predictions on its training set?
That the AI is in principle capable of running computations involving quantum fields to represent a genius philosopher?
If I can assume that stuff, then it feels like a fairly core task, abundantly stress-tested during training, to read off the genius philosopher’s spoken opinions about e.g. moral philosophy from the quantum fields. How else could quantum fields be useful for next-token predictions?
Another probe: Is alignment supposed to be hard in this hypothetical because the AI can’t represent human values in principle? Or is it supposed to be hard because it also has a lot of unsatisfactory representations of human values, and there’s no good method for finding a satisfactory needle in the unsatisfactory haystack? Or some other reason?
This sounds a lot like saying “it might fail to generalize”. Supposing we make a lot of progress on out-of-distribution generalization, is alignment getting any easier according to you? Wouldn’t that imply our systems are getting better at choosing proxies which generalize even when the human isn’t ‘present’?
Because the human isn’t going to constantly be present for everything the system does after it’s deployed (unless for some reason it’s not deployed).
Quantum fields are useful for an endless variety of things, from modeling genius philosophers to predicting lottery numbers. If your next-token prediction task involves any physically instantiated system, a model that uses QFT will be able to predict that system’s time-evolution with alacrity.
(Yes, this is computationally intractable, but we’re already in full-on hypothetical land with the QFT-based model to begin with. Remember, this is an exercise in showing what happens in the worst-case scenario for alignment, where the model’s native ontology completely diverges from our own.)
So we need not assume that predicting “the genius philosopher” is a core task. It’s enough to assume that the model is capable of it, among other things—which a QFT-based model certainly would be. Which, not so coincidentally, brings us to your next question:
Consider how, during training, the human overseer (or genius philosopher, if you prefer) would have been pointed out to the model. We don’t have reliable access to its internal world-model, and even if we did we’d see blobs of amplitude and not much else. There’s no means, in that setting, of picking out the human and telling the model to unambiguously defer to that human.
What must happen instead, then, is something like next-token prediction: we perform gradient descent (or some other optimization method; it doesn’t really matter for the purposes of our story) on the model’s outputs, rewarding it when its outputs happen to match those of the human. The hope is that this will lead, in the limit, to the matching no longer occurring by happenstance—that if we train for long enough and in a varied enough set of situations, the best way for the model to produce outputs that track those of the human is to model that human, even in its QFT ontology.
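To make the shape of that training signal concrete, here is a minimal sketch (my own illustrative PyTorch code, with made-up dimensions and data, not anything from the thread): the loss only rewards matching the human’s next token, and is silent about how the model must represent that human internally.

```python
# Hypothetical toy setup: the only gradient signal is "match the human's next token".
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, dim = 100, 32

class TinyNextTokenModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):               # tokens: (batch, seq)
        h = self.embed(tokens).mean(dim=1)   # stand-in for whatever internal ontology it learns
        return self.head(h)                  # logits for the next token

model = TinyNextTokenModel()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Pretend data: contexts plus the token the human overseer actually produced next.
contexts = torch.randint(0, vocab_size, (64, 8))
human_next_tokens = torch.randint(0, vocab_size, (64,))

for _ in range(100):
    logits = model(contexts)
    loss = F.cross_entropy(logits, human_next_tokens)  # penalize any mismatch with the human's token
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Nothing in this loop constrains the internal representation; it only scores the outputs.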
But do we know for a fact that this will be the case? Even if it is, what happens when the overseer isn’t present to provide their actual feedback, as was never the case during training? What becomes the model’s referent then? We’d like to deploy it without an overseer, or in situations too complex for an overseer to understand. And whether the model’s behavior in those situations conforms to what the overseer would want, ideally, depends on what kinds of behind-the-scenes extrapolation the model is doing—which, if the model’s native ontology is something in which “human philosophers” are not basic objects, is liable to look very weird indeed.
Sort of, yes—but I’d call it “malgeneralization” rather than “misgeneralization”. It’s not failing to generalize, it’s just not generalizing the way you’d want it to.
Depends on what you mean by “progress”, and “out-of-distribution”. A powerful QFT-based model can make perfectly accurate predictions in any scenario you care to put it in, so it’s not like you’ll observe it getting things wrong. What experiments, and experimental outcomes, are you imagining here, such that those outcomes would provide evidence of “progress on out-of-distribution generalization”, when fundamentally the issue is expected to arise in situations where the experimenters are themselves absent (and which—crucially—is not a condition you can replicate as part of an experimental setup)?
I think it ought to be possible for someone to always be present. [I’m also not sure it would be necessary.]
It’s not the genius philosopher that’s the core task, it’s the reading of their opinions out of a QFT-based simulation of them. As I understand this thought experiment, we’re doing next-token prediction on e.g. a book written by a philosopher, and in order to predict the next token using QFT, the obvious method is to use QFT to simulate the philosopher. But that’s not quite enough—you also need to read the next token out of that QFT-based simulation if you actually want to predict it. This sort of ‘reading tokens out of a QFT simulation’ thing would be very common, thus something the system gets good at in order to succeed at next-token prediction.
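If it helps, here is a toy sketch of that two-part structure (entirely hypothetical function names and shapes, just to gesture at the idea): one piece evolves a detailed simulated state, and a separate learned piece reads a predicted token out of that state, since tokens are not basic objects of the simulation.

```python
# Hypothetical sketch: "simulate, then read a token out of the simulation".
import numpy as np

rng = np.random.default_rng(0)
state_dim, vocab_size = 64, 50

def simulate_step(world_state, dynamics):
    """Stand-in for the (intractably detailed) physics-level simulation."""
    return np.tanh(dynamics @ world_state)

def readout(world_state, readout_weights):
    """Learned map from simulated state to a next-token prediction."""
    logits = readout_weights @ world_state
    return int(np.argmax(logits))

dynamics = rng.normal(size=(state_dim, state_dim))
readout_weights = rng.normal(size=(vocab_size, state_dim))  # in practice, shaped by next-token loss

world_state = rng.normal(size=state_dim)
world_state = simulate_step(world_state, dynamics)
print("predicted next token id:", readout(world_state, readout_weights))
```

The point of the sketch is just that the readout is a distinct piece of machinery that training has to produce, over and above the simulation itself.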
I think perhaps there’s more to your thought experiment than just alien abstractions, and it’s worth disentangling these assumptions. For one thing, in a standard train/dev/test setup, the model is arguably not really doing prediction, it’s doing retrodiction. It’s making ‘predictions’ about things which already happened in the past. The final model is chosen based on what retrodicts the data the best. Also, usually the data is IID rather than sequential—there’s no time component to the data points (unless it’s a time-series problem, which it usually isn’t). The fact that we’re choosing a model which retrodicts well is why the presence/absence of a human is generally assumed to be irrelevant, and emphasizing this factor sounds wacky to my ML engineer ears.
So basically I suspect what you’re really trying to claim here, which incidentally I’ve also seen John allude to elsewhere, is that the standard assumptions of machine learning involving retrodiction and IID data points may break down once your system gets smart enough. This is a possibility worth exploring, I just want to clarify that it seems orthogonal to the issue of alien abstractions. In principle one can imagine a system that heavily features QFT in its internal ontology yet still can be characterized as retrodicting on IID data, or a system with vanilla abstractions that can’t be characterized as retrodicting on IID data. I think exploring this in a post could be valuable, because it seems like an under-discussed source of disagreement between certain doomer-type people and mainstream ML folks.
I think I don’t understand what you’re imagining here. Are you imagining a human manually overseeing all outputs of something like ChatGPT, or Microsoft Copilot, before those outputs are sent to the end user (or, worse yet, put directly into production)?
[I also think I don’t understand why you make the bracketed claim you do, but perhaps hashing that out isn’t a conversational priority.]
It sounds like your understanding of the thought experiment differs from mine. If I were to guess, I’d guess that by “you” you’re referring to someone or something outside of the model, who has access to the model’s internals, and who uses that access to, as you say, “read” the next token out of the model’s ontology. However, this is not the setup we’re in with respect to actual models (with the exception perhaps of some fairly limited experiments in mechanistic interpretability)—and it’s also not the setup of the thought experiment, which (after all) is about precisely what happens when you can’t read things out of the model’s internal ontology, because it’s too alien to be interpreted.
In other words: “you” don’t read the next token out of the QFT simulation. The model is responsible for doing that translation work. How do we get it to do that, even though we don’t know how to specify the nature of the translation work, much less do it ourselves? Well, simple: in cases where we have access to the ground truth of the next token, e.g. because we’re having it predict an existing book passage, we simply penalize it whenever its output fails to match the next token in the book. In this way, the model can be incentivized to correctly predict whatever we want it to predict, even if we wouldn’t know how to tell it explicitly to do whatever it’s doing.
(The nature of this relationship—whereby humans train opaque algorithms to do things they wouldn’t themselves be able to write out as pseudocode—is arguably the essence of modern deep learning in toto.)
Yes, this is a reasonable description to my eyes. Moreover, I actually think it maps fairly well to the above description of how a QFT-style model might be trained to predict the next token of some body of text; in your terms, this is possible specifically because the text already exists, and retrodictions of that text can be graded based on how well they compare against the ground truth.
This, on the other hand, doesn’t sound right to me. Yes, there are certainly applications where the training regime produces IID data, but next-token prediction is pretty clearly not one of those? Later tokens are highly conditionally dependent on previous tokens, in a way that’s much closer to a time series than to some kind of IID process. Possibly part of the disconnect is that we’re imagining different applications entirely—which might also explain our differing intuitions w.r.t. deployment?
Right, so just to check that we’re on the same page: do we agree that after a (retrodictively trained) model is deployed for some use case other than retrodicting existing data—for generative use, say, or for use in some kind of online RL setup—it’ll be doing something other than retrodicting? And that in that situation, the source of (retrodictable) ground truth that was present during training—whether that was a book, a philosopher, or something else—will be absent?
If we do actually agree about that, then that distinction is really all I’m referring to! You can think of it as training set versus test set, to use a more standard ML analogy, except in this case the “test set” isn’t labeled at all, because no one labeled it in advance, and also it’s coming in from an unpredictable outside world rather than from a folder on someone’s hard drive.
Why does that matter? Well, because then we’re essentially at the mercy of the model’s generalization properties, in a way we weren’t while it was retrodicting the training set (or even the validation set, if one of those existed). If it gets anything wrong, there’s no longer any training signal or gradient to penalize it for being “wrong”—so the only remaining question is, just how likely is it to be “wrong”, after being trained for however long it was trained?
And that’s where the QFT model comes in. It says, actually, even if you train me for a good long while on a good amount of data, there are lots of ways for me to generalize “wrongly” from your perspective, if I’m modeling the universe at the level of quantum fields. Sure, I got all the retrodictions right while there was something to be retrodicted, but what exactly makes you think I did that by modeling the philosopher whose remarks I was being trained on?
Maybe I was predicting the soundwaves passing through a particular region of air in the room where he was located—or perhaps I was predicting the pattern of physical transistors in the segment of memory of a particular computer containing his works. Those physical locations in spacetime still exist, and now that I’m deployed, I continue to make predictions using those as my referent—except, the encodings I’m predicting there no longer resemble anything like coherent moral philosophy, or coherent anything, really.
The philosopher has left the room, or the computer’s memory has been reconfigured—so what exactly are the criteria by which I’m supposed to act now? Well, they’re going to be something, presumably—but they’re not going to be something explicit. They’re going to be something implicit to my QFT ontology, something that—back when the philosopher was there, during training—worked in tandem with the specifics of his presence, and the setup involving him, to produce accurate retrodictions of his judgements on various matters.
Now that that’s no longer the case, those same criteria describe some mathematical function that bears no meaningful correspondence to anything a human would recognize, valuable or not—but the function exists, and it can be maximized. Not much can be said about what maximizing that function might result in, except that it’s unlikely to look anything like “doing right according to the philosopher”.
That’s why the QFT example is important. A more plausible model, one that doesn’t think natively in terms of quantum amplitudes, permits the possibility of correctly compressing what we want it to compress—of learning to retrodict, not some strange physical correlates of the philosopher’s various motor outputs, but the actual philosopher’s beliefs as we would understand them. Whether that happens, or whether a QFT-style outcome happens instead, depends in large part on the inductive biases of the model’s architecture and the training process—inductive biases on which the natural abstraction hypothesis asserts a possible constraint.
Was using a metaphorical “you”. Probably should’ve said something like “gradient descent will find a way to read the next token out of the QFT-based simulation”.
I suppose I should’ve said various documents are IID to be more clear. I would certainly guess they are.
Generally speaking, yes.
Well, if we’re following standard ML best practices, we have a train set, a dev set, and a test set. The purpose of the dev set is to check and ensure that things are generalizing properly. If they aren’t generalizing properly, we tweak various hyperparameters of the model and retrain until they do generalize properly on the dev set. Then we do a final check on the test set to ensure we didn’t overfit the dev set. If you forgot or never learned this stuff, I highly recommend brushing up on it.
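For concreteness, a minimal sketch of that workflow (toy data and illustrative numbers only):

```python
# Standard train/dev/test carve-up of a single dataset, as described above.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] > 0).astype(int)           # stand-in labels

idx = rng.permutation(len(X))
train_idx, dev_idx, test_idx = np.split(idx, [700, 850])   # 70% / 15% / 15%

# Workflow, in outline:
#   1. fit candidate models on X[train_idx], y[train_idx]
#   2. choose hyperparameters by their score on X[dev_idx], y[dev_idx]
#   3. report the chosen model's score on X[test_idx], y[test_idx] exactly once
print(len(train_idx), len(dev_idx), len(test_idx))
```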
In principle we could construct a test set or dev set either before or after the model has been trained. It shouldn’t make a difference under normal circumstances. It sounds like maybe you’re discussing a scenario where the model has achieved a level of omniscience, and it does fine on data that was available during its training, because it’s able to read off of an omniscient world-model. But then it fails on data generated in the future, because the translation method for its omniscient world-model only works on artifacts that were present during training. Basically, the time at which the data was generated could constitute a hidden and unexpected source of distribution shift. Does that summarize the core concern?
(To be clear, this sort of acquired omniscience is liable to sound kooky to many ML researchers. I think it’s worth stress-testing alignment proposals under these sorts of extreme scenarios, but I’m not sure we should weight them heavily in terms of estimating our probability of success. In this particular scenario, the model’s performance would drop on data generated after training, and that would hurt the company’s bottom line, and they would have a strong financial incentive to fix it. So I don’t know if thinking about this is a comparative advantage for alignment researchers.)
BTW, the point about documents being IID was meant to indicate that there’s little incentive for the model to e.g. retrodict the coordinates of the server storing a particular document—the sort of data that could aid and incentivize omniscience to a greater degree.
In any case, I would argue that “accidental omniscience” characterizes the problem better than “alien abstractions”. As before, you can imagine an accidentally-omniscient model that uses vanilla abstractions, or a non-omniscient model that uses alien ones.
(Just to be clear: yes, I know what training and test sets are, as well as dev sets/validation sets. You might notice I actually used the phrase “validation set” in my earlier reply to you, so it’s not a matter of guessing someone’s password—I’m quite familiar with these concepts, as someone who’s implemented ML models myself.)
Generally speaking, training, validation, and test datasets are all sourced the same way—in fact, sometimes they’re literally sourced from the same dataset, and the delineation between train/dev/test is introduced during training itself, by arbitrarily carving up the original dataset into smaller sets of appropriate size. This may capture the idea of “IID” you seem to appeal to elsewhere in your comment—that it’s possible to test the model’s generalization performance on some held-out subset of data from the same source(s) it was trained on.
In ML terms, what the thought experiment points to is a form of underlying distributional shift, one that isn’t (and can’t be) captured by “IID” validation or test datasets. The QFT model in particular highlights the extent to which your training process, however broad or inclusive from a parochial human standpoint, contains many incidental distributional correlates to your training signal which (1) exist in all of your data, including any you might hope to rely on to validate your model’s generalization performance, and (2) cease to correlate off-distribution, during deployment.
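As a toy illustration of that kind of incidental correlate (made-up data, just to show the structure of the failure): a proxy feature tracks the training signal in every split you could carve out of the collected data, so IID validation looks fine, and then stops correlating once the setup that generated the data is gone.

```python
# Hypothetical toy example: a proxy that is perfectly informative in-distribution
# (overseer present) and uninformative at deployment (overseer absent).
import numpy as np

rng = np.random.default_rng(0)

def collect(n, overseer_present):
    signal = rng.integers(0, 2, n)                                   # what we actually care about
    proxy = signal if overseer_present else rng.integers(0, 2, n)    # incidental correlate
    X = np.stack([proxy, rng.normal(size=n)], axis=1)
    return X, signal

X, y = collect(1000, overseer_present=True)
train, val = np.arange(700), np.arange(700, 1000)

predict = lambda X: X[:, 0].astype(int)   # a "model" that latched onto the proxy

print("validation accuracy:", (predict(X[val]) == y[val]).mean())   # 1.0: looks fine
X_dep, y_dep = collect(1000, overseer_present=False)
print("deployment accuracy:", (predict(X_dep) == y_dep).mean())     # ~0.5: the proxy broke
```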
This can be caused by what you call “omniscience”, but it need not be; there are other, more plausible distributional differences that might be picked up on by other kinds of models. But QFT is (as far as our current understanding of physics goes) very close to the base ontology of our universe, and so what is inferable using QFT is naturally going to be very different from what is inferable using some other (less powerful) ontology. QFT is a very powerful ontology!
If you want to call that “omniscience”, you can, although note that strictly speaking the model is still just working from inferences from training data. It’s just that, if you feed enough data to a model that can hold entire swaths of the physical universe inside of its metaphorical “head”, pretty soon hypotheses that involve the actual state of that universe will begin to outperform hypotheses that don’t, and which instead use some kind of lossy approximation of that state involving intermediary concepts like “intent”, “belief”, “agent”, “subjective state”, etc.
You’re close; I’d say the concern is slightly worse than that. It’s that the “future data” never actually comes into existence, at any point. So the source of distributional shift isn’t just “the data is generated at the wrong time”, it’s “the data never gets externally generated to begin with, and you (the model) have to work with predictions of what the data counterfactually would have been, had it been generated”.
(This would be the case e.g. with any concept of “human approval” that came from a literal physical human or group of humans during training, and not after the system was deployed “in the wild”.)
The problem is that “vanilla” abstractions are not the most predictively useful possible abstractions, if you’ve got access to better ones. And models whose ambient hypothesis space is broad enough to include better abstractions (from the standpoint of predictive accuracy) will gravitate towards those, as is incentivized by the outer form of the training task. QFT is the extreme example of a “better abstraction”, but in principle (if the natural abstraction hypothesis fails) there will be all sorts and shapes of abstractions, and some of them will be available to us, and some of them will be available to the model, and these sets will not fully overlap—which is a concern in worlds where different abstractions lead to different generalization properties.
Indeed. I think the key thing for me is, I expect the model to be strongly incentivized to have a solid translation layer from its internal ontology to e.g. English language, due to being trained on lots of English language data. Due to Occam’s Razor, I expect the internal ontology to be biased towards that of an English-language speaker.
I’m imagining something like: early in training the model makes use of those lossy approximations because they are a cheap/accessible way to improve its predictive accuracy. Later in training, assuming it’s being trained on the sort of gigantic scale that would allow it to hold swaths of the physical universe in its head, it loses those desired lossy abstractions due to catastrophic forgetting. Is that an OK way to operationalize your concern?
I’m still not convinced that this problem is a priority. It seems like a problem which will be encountered very late if ever, and will lead to ‘random’ failures on predicting future/counterfactual data in a way that’s fairly obvious.
Nitpicky edit request: your comment contains some typos that make it a bit hard to parse (“be other”, “we it”). (So apologies if my reaction misunderstands your point.)
[Assuming that the opposite of the natural abstraction hypothesis is true—ie, not just that “not all powerful AIs share ontology with us”, but actually “most powerful AIs don’t share ontology with us”:]
I also expect that an AI with superior ontology would be able to answer your questions about its ontology, in a way that would make you feel like[1] you understand what is happening. But that isn’t the same as being able to control the AI’s actions, or being able to affect its goal specification in a predictable way (to you). You totally wouldn’t be able to do that.
([Vague intuition, needs work] I suspect that if you had a method for predictably-to-you translating from your ontology to the AI’s ontology, then this could be used to prove that you can easily find a powerful AI that shares an ontology with us. Because that AI could be basically thought of as using our ontology.)
Though note that unless you switched to some better ontology, you wouldn’t actually understand what is going on, because your ontology is so bogus that it doesn’t even make sense to talk about “you understanding [stuff]”. This might not be true for all kinds of [stuff], though. EG, perhaps our understanding of set theory is fine while our understanding of agency, goals, physics, and whatever else, isn’t.
if it can quantum-simulate a human brain, then it can in principle decode things from it as well. the question is how to demand that it do so in the math that defines the system.
Why do you assume that we need to demand this be done in “the math that defines the system”?
I would assume we could have a discussion with this higher-ontology being to find a happy specification, using our ontologies, that it can tell us we’ll like, also using our ontologies.
A 5-year-old might not understand an adult’s specific definition of “heavy”, but it’s not too hard for it to ask for a heavy thing.
I don’t at all think that’s off the table temporarily! I don’t trust that it’ll stay on the table—if the adult has malicious intent, knowing what the child means isn’t enough; and it seems hard to know when it’ll stop being viable without more progress. (For example, I doubt it’ll ever be a good idea to do that with an OpenAI model; they seem highly deceptively misaligned to most of their users. It seems possible for it to be a good idea with Claude.) But the challenge is how to certify that the math does in fact say the right thing to durably point to the ontology in which we want to preserve good things; at some point we have to actually understand some sort of specification that constrains what the stuff we don’t understand is doing to be what it seems to say in natural language.
I think this quantum fields example is perhaps not all that forceful, because in your OP you state
However, it sounds like you’re describing a system where we represent humans using quantum fields as a routine matter, so fitting the translation into the system isn’t sounding like a huge problem? Like, if I want to know the answer to some moral dilemma, I can simulate my favorite philosopher at the level of quantum fields in order to hear what they would say if they were asked about the dilemma. Sounds like it could be just as good as an em, where alignment is concerned.
It’s hard for me to imagine a world where developing representations that allow you to make good next-token predictions etc. doesn’t also develop representations that can somehow be useful for alignment. Would be interested to hear fleshed-out counterexamples.
My take:
Can we reason about a thermostat’s ontology? Only sort of. We can say things like “The thermostat represents the local temperature. It wants that temperature to be the same as the set point.” But the thermostat itself is only very loosely approximating that kind of behavior—imputing any sort of generalizability to it that it doesn’t actually have is an anthropomorphic fiction. And it’s blatantly a fiction, because there’s more than one way to do it—you can suppose the thermostat wants only the temperature sensor to be at the right temperature vs. it wants the whole room vs. the whole world to be at that temperature, or that it’s “changing its mind” when it breaks vs. it would want to be repaired, etc.
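A tiny sketch of that point (my own code, not from the comment): the thermostat’s actual mechanism is a single comparison, and several incompatible “goal” stories fit it equally well.

```python
# All the thermostat actually does: compare one number to a set point.
SETPOINT = 21.0

def thermostat_step(sensor_temp):
    return "heat_on" if sensor_temp < SETPOINT else "heat_off"

# Candidate stories a modeller could tell, all consistent with the behaviour above
# whenever the sensor happens to track the room:
#   1. "It wants the sensor reading to equal 21."
#   2. "It wants the room to be 21."
#   3. "It wants the whole world to be 21, and can only act locally."
# The mechanism underdetermines which story is "the" goal; that choice is interpretation.
for temp in (18.0, 21.0, 24.0):
    print(temp, "->", thermostat_step(temp))
```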
To the superintelligent AI, we are the thermostat. You cannot be aligned to humans purely by being smart, because finding “the human ontology” is an act of interpretation, of story-telling, not just a question of fact. Helping an AI narrow down how to interpret humans as moral patients requires giving it extra assumptions or meta-level processes. (Or as I might call it, “solving the alignment problem.”)
How can this be, if a smart AI can talk to humans intelligibly and predict their behavior and so forth, even without specifying any of my “extra assumptions”? Well, how can we interact with a thermostat in a way that it can “understand,” even without fixing any particular story about its desires? We understand how it works in our own way, and we take actions using our own understanding. Often our interactions fall in the domain of the normal functioning of the thermostat, under which several different possible stories about “what the thermostat wants” apply, and sometimes we think about such stories but mostly we don’t bother.
Your thermostat example seems to rather highlight a disanalogy: The concept of a goal doesn’t apply to the thermostat because there is apparently no fact of the matter about which counterfactual situations would satisfy such a “goal”. I think part of the reason is that the concept of a goal requires the ability to apply it to counterfactual situations. But for humans there is such a fact of the matter; there are things that would be incompatible with or required by our goals. Even though some/many other things may be neutral (neither incompatible nor necessary).
So I don’t think there are any “extra assumptions” needed. In fact, even if there were such extra assumptions, it’s hard to see how they could be relevant. (This is analogous to the ancient philosophical argument that God declaring murder to be good obviously wouldn’t make it good, so God declaring murder to be bad must be irrelevant to murder being bad.)
Pick a goal, and it’s easy to say what’s required. But pick a human, and it’s not easy to say what their goal is.
Is my goal to survive? And yet I take plenty of risky actions like driving that trade that off against other things. And even worse, I deliberately undergo some transformative experiences (e.g. moving to a different city and making a bunch of new friends) that in some sense “make me a different person.” And even worse, sometimes I’m irrational or make mistakes, but under different interpretations of my behavior different things are irrational. If you interpret me as really wanting to survive, driving is an irrational thing I do because it’s common in my culture and I don’t have a good intuitive feel for statistics. If you interpret me a different way, maybe my intuitive feeling gets interpreted as more rational but my goal changes from survival to something more complicated.
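To put toy numbers on that ambiguity (entirely made up): two different belief-plus-utility packages rationalize the same observed choice to drive, so the observed behaviour alone does not pick out one of them as “the” goal.

```python
# Hypothetical expected-utility bookkeeping; the numbers are illustrative only.
def expected_utility(p_crash, u_arrive, u_crash):
    return (1 - p_crash) * u_arrive + p_crash * u_crash

def chooses_to_drive(p_crash, u_arrive, u_crash, u_stay_home=0.0):
    return expected_utility(p_crash, u_arrive, u_crash) > u_stay_home

# Interpretation A: values survival enormously, but badly underestimates the risk.
print(chooses_to_drive(p_crash=1e-9, u_arrive=1.0, u_crash=-1e6))   # True
# Interpretation B: estimates the risk correctly, but weights survival less heavily.
print(chooses_to_drive(p_crash=1e-4, u_arrive=1.0, u_crash=-1e3))   # True: same observed behaviour
```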
More complicated yes, but I assume the question is whether superintelligent AIs can understand what you want “overall” at least as well as other humans do. And here, I would agree with ozziegooen, the answer seems to be yes—even if they otherwise tend to reason about things differently than we do. Because there seems to be a fact of the matter about what you want overall, even if it is not easy to predict. But predicting it is not obviously inhibited by a tendency to think in different terms (“ontology”). Is the worry perhaps that the AI finds the concept of “what the human wants overall” unnatural, so is unlikely to optimize for it?
“It sure seems like there’s a fact of the matter” is not a very forceful argument to me, especially in light of things like it being impossible to uniquely fit a rationality model and utility function to human behavior.
If there was no fact of the matter of what you want overall, there would be no fact of the matter of whether an AI is aligned with you or not. Which would mean there is no alignment problem.
The referenced post seems to apply specifically to IRL, which is purely based on behaviorism and doesn’t take information about the nature of the agent into account. (E.g. the fact that humans evolved from natural selection tells us a lot of what they probably want, and information about their brain could tell us how intelligent they are.) It’s also only an epistemic point about the problem of externally inferring values, not about those values not existing.
See my sequence “Reducing Goodhart” for what I (or me from a few years ago) think the impact is on the alignment problem.
Sure. But only if you already know what evolved creatures tend to want. I.e. once you have already made interpretive choices in one case, you can get some information on how well they hang together with other cases.
Simplifying somewhat: I think that my biggest delta with John is that I don’t think the natural abstraction hypothesis holds. (EG, if I believed it holds, I would become more optimistic about single-agent alignment, to the point of viewing Moloch as higher priority.) At the same time, I believe that powerful AIs will be able to understand humans just fine. My vague attempt at reconciling these two is something like this:
Humans have some ontology, in which they think about the world. This corresponds to a world model. This world model has a certain amount of prediction error.
The powerful AI wants to have much lower prediction error than that. When I say “natural abstraction hypothesis is false”, I imagine something like: If you want to have a much lower prediction error than that, you have to use a different ontology / world-model than humans use. And in fact if you want sufficiently low error, then all ontologies that can achieve that are very different from our ontology—either (reasonably) simple and different, or very complex (and, I guess, therefore also different).
So when the AI “understands humans perfectly well”, that means something like: The AI can visualise the flawed (ie, high prediction error) model that we use to think about the world. And it does this accurately. But it also sees how that model is completely wrong, and how the things that we say we want only make sense within a model that has very little to do with the actual world.
(An example would be how a four-year old might think about the world in terms of Good people and Evil people. The government sometimes does Bad things because there are many Evil people in it. And then the solution is to replace all the Evil people by Good people. And that might internally make sense, and maybe an adult can understand this way of thinking, while also being like “this has nothing to do with how the world actually works; if you want to be serious about anything, just throw this model out”.)
This sounds a lot like a good/preferable thing to me. I would assume that we’d generally want AIs with ideal / superior ontologies.
It’s not clear to me why you’d think such a scenario would make us less optimistic about single-agent alignment. (If I’m understanding correctly)
As a quick reaction, let me just note that I agree that (all else being equal) this (ie, “the AI understanding us & having superior ontology”) seems desirable. And also that my comment above did not present any argument about why we should be pessimistic about AI X-risk if we believe that the natural abstraction hypothesis is false. (I was just trying to explain why/how “the AI has a different ontology” is compatible with “the AI understands our ontology”.)
As a longer reaction: I think my primary reason for pessimism, if the natural abstraction hypothesis is false, is that a bunch of existing proposals might work if the hypothesis were true, but don’t work if the hypothesis is false. (EG, if the hypothesis is true, I can imagine that “do a lot of RLHF, and then ramp up the AI’s intelligence” could just work. Similarly for “just train the AI to not be deceptive”.)
If I had to gesture at an underlying principle, then perhaps it could be something like: Suppose we successfully code up an AI which is pretty good at optimising, or create a process which gives rise to such an AI. [Inference step missing here.] Then the goals and planning of this AI will be happening in some ontology which allows for low prediction error. But this will be completely alien to our ontology. [Inference step missing here.] And, therefore, things that score very highly with respect to these (“alien”) goals will have roughly no value[1] according to our preferences.
(I am not quite clear on this, but I think that if this paragraph was false, then you could come up with a way of falsifying my earlier description of what it looks like when the natural abstraction hypothesis is false.)
IE, no positive value, but also no negative value. So no S-risk.
Thanks for that explanation.
Thanks, this makes sense to me.
Yeah, I guess I’m unsure about that ‘[Inference step missing here.]’. My guess is that such a system would be able to recognize situations where things that score highly with respect to its ontology would score poorly, or would be likely to score poorly, using a human ontology. Like, it would be able to simulate a human deliberating on this for a very long time and coming to some conclusion.
I imagine that the cases where this would be scary are some narrow ones (though perhaps likely ones) where the system is both dramatically intelligent in specific ways, but incredibly inept in others. This ineptness isn’t severe enough to stop it from taking over the world, but it is enough to stop it from being at all able to maximize goals—and it also doesn’t take basic risk measures like “just keep a bunch of humans around and chat to them a whole lot, when curious”, or “try to first make a better AI that doesn’t have these failures, before doing huge unilateralist actions” for some reason.
It’s very hard for me to imagine such an agent, but that doesn’t mean it’s not possible, or perhaps likely.
[I am confused about your response. I fully endorse your paragraph on “the AI with superior ontology would be able to predict how humans would react to things”. But then the follow-up, on when this would be scary, seems mostly irrelevant / wrong to me—meaning that I am missing some implicit assumptions, misunderstanding how you view this, etc. I will try react in a hopefully-helpful way, but I might be completely missing the mark here, in which case I apologise :).]
I think the problem is that there is a difference between:
(1) AI which can predict how things score in human ontology; and
(2) AI which has “select things that score high in human ontology” as part of its goal[1].
And then, in the worlds where natural abstraction hypothesis is false: Most AIs achieve (1) as a by-product of the instrumental sub-goal of having low prediction error / being selected by our training processes / being able to manipulate humans. But us successfully achieving (2) for a powerful AI would require the natural abstraction hypothesis[2].
And this leaves us two options. First, maybe we just have no write access to the AI’s utility function at all. (EG, my neighbour would be very happy if I gave him $10k, but he doesn’t have any way of making me (intrinsically) desire doing that.) Second, we might have write access to the AI’s utility function, but not in a way that will lead to predictable changes in goals or behaviour. (EG, if you give me full access to the weights of an LLM, it’s not like I know how to use that to turn that LLM into an actually-helpful assistant.)
(And both of these seem scary to me, because of the argument that “not-fully-aligned goal + extremely powerful optimisation ==> extinction”. Which I didn’t argue for here.)
IE, not just instrumentally because it is pretending to be aligned while becoming more powerful, etc.
More precisely: Damn, we need a better terminology here. The way I understand things, “natural abstraction hypothesis” is the claim that most AIs will converge to an ontology that is similar to ours. The negation of that is that a non-trivial portion of AIs will use an ontology that is different from ours. What I subscribe to is that “almost no powerful AIs will use an ontology that is similar to ours”. Let’s call that “strong negation” of the natural abstraction hypothesis. So achieving (2) would be a counterexample to this strong negation.
Ironically, I believe the strong negation hypothesis because I expect that very powerful AIs will arrive at similar ways of modelling the world—and those are all different from how we model the world.