I am much more optimistic about ML not generalizing (by default) to dangerous capabilities and domains than what I perceive to be Yudkowsky’s position. I found this to be a relatively legible area of disagreement (from my perspective), and illustrative of key reasons why I’m not-hopeless about doing very impactful things safely with current ML, so I have taken the time to outline my thoughts below.
A piece of my position.
Here’s one thing I believe: You can do the following things basically safely:
(Formal) theorem-proving
(In line with Yudkowsky, I mean “old-school fancy-math theorems”: things like theorems in arXiv pure math papers)
(I don’t expect this to be particularly useful, but mentioning it as a safe-thing-you-can-do.)
Programming and software-engineering
I mean things like implementing a machine learning experiment based on a natural-ish language task description, with intended solutions spanning less than 10^5 lines of Python
Analyzing experimental results, generating new experiment ideas and writing descriptions for the implementations of these ideas
Namely in the context of ML experiments, not science in general
More precisely: if you do “the obvious things” in the current paradigm (transformers + RL + maybe Monte Carlo Tree Search), aim for getting (merely) top-human-level results in the above domains, and take “obvious” safety/containment measures, you will obtain top-human-level performance in these domains without the systems posing non-negligible catastrophic risk.
Some “obvious” safety/containment measures I have in mind (a rough sketch of how these could fit together in code follows this list):
Sandbox the environments in which you run the models.
During training, test every once in a while that the model isn’t capable of doing tasks that we didn’t train the model for and which we want the model to not be good at.
From my perspective it’s quite clear that a not-vastly-superhuman theorem-prover won’t generalize to e.g. escaping its sandbox environment, for reasons explained below. But if you are worried about this, maybe you could do things like “check that it isn’t good at strategy board games involving theory-of-mind / hidden information” or “check it can’t find non-trivial security vulnerabilities in given programs”, by fine-tuning and evaluating the model on these tasks. These tasks seem like something you could format as theorem-proving. For the software-engineer and science AIs you can similarly test generalization capability.
Don’t train the model to predict all the text found on the Internet.
(How do you get the model to understand natural-ish language, then? My answer: train the model on text generated by current LLMs, with the aim of getting the relevant models to understand natural language and instructions without leaking tons of information about the world.)
Generally limit the information they get about the world.
Have another model (and other metrics) evaluate that the model’s outputs are not anomalous; if they are, halt the model.
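To make these measures a bit more concrete, here is a minimal sketch of how the periodic “didn’t-generalize” checks and the anomaly-halting monitor could be wired into a training loop. Everything named here (train_step, anomaly_score, probe_capability, the example tasks and thresholds) is a hypothetical stand-in rather than any real training stack; only the control flow is meant to illustrate the idea.

```python
# Minimal sketch (hypothetical stand-ins, not a real training stack) of the
# containment measures above: periodically check that the model has NOT
# become capable at held-out tasks we never trained it for, and let a
# separate monitor halt training if the model's outputs look anomalous.
from typing import Any, Callable, Iterable, Sequence

def train_with_containment(
    train_step: Callable[[Any], Any],          # one training step; returns the model's outputs
    anomaly_score: Callable[[Any], float],     # separate monitor model / other metrics
    probe_capability: Callable[[str], float],  # fine-tune a throwaway copy on a held-out task, return its score
    data_stream: Iterable[Any],
    held_out_tasks: Sequence[str] = ("hidden-information board games", "finding security vulnerabilities"),
    eval_every: int = 10_000,
    capability_threshold: float = 0.1,   # halt if the model gets non-trivially good at held-out tasks
    anomaly_threshold: float = 0.9,      # halt if outputs look anomalous to the monitor
) -> None:
    for step, batch in enumerate(data_stream):
        outputs = train_step(batch)  # ordinary training on the narrow target domain

        # "Have another model (and other metrics) evaluate that the model's
        # outputs are not anomalous; if they are, halt the model."
        if anomaly_score(outputs) > anomaly_threshold:
            raise RuntimeError(f"Anomalous output at step {step}; halting.")

        # "Test every once in a while that the model isn't capable of doing
        # tasks that we didn't train the model for."
        if step % eval_every == 0:
            for task in held_out_tasks:
                if probe_capability(task) > capability_threshold:
                    raise RuntimeError(f"Unwanted generalization to {task!r} at step {step}; halting.")
```

Note that the capability probe is pictured as fine-tuning a throwaway copy rather than just zero-shot prompting, matching the “by fine-tuning and evaluating the model on these tasks” suggestion above.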
It’s not clear that this alone would be sufficient to end the acute risk period, but:
I think that there are things in these areas that are useful (like building increasingly useful things in increasingly realistic physics simulations).
The reasons I’m relatively optimistic about the safety of these things also carry over (though to a lesser extent) more generally to the current LLM-dominated paradigm, which more obviously allows for very impactful things.
(I acknowledge that this part is cruxy, but won’t discuss it further here.)
Why do I think these things are safe?
Three reasons:
1: The information the model has about the world is very limited, and utilizing that information seems computationally very difficult.
It seems incredibly hard for a model to locate itself in Tegmark IV, or whatever it is that Reality is, based on only the data it sees in the above training process.
Note also that the models I’m picturing have no “memory” besides what gets carved into them by the training process (e.g. SGD). This doesn’t prevent the model from “remembering” and “accessing” information from past inputs to leverage in the current context—obviously there’s information flow there—but it does make it much more difficult. (See e.g. the Connecting the Dots paper for more on the capabilities of LLMs here.)
2: There is little pressure for the model to think deeply about reality outside of the inputs it sees.
Yudkowsky responds to a similar comment by Ngo with “that’s not where humans came from”. I find this response unpersuasive (both in its original context and if one were to apply it here) for multiple reasons:
I think there’s clearly more pressure in the direction of “generally understand the world” by inclusive genetic fitness than there is for a theorem-prover, software-engineer or an experimental-result-analyzer by SGD.
The details of the training process for nanoengineering (the example used by Ngo and Yudkowsky) were too unclear for me to comment on that.
There is a massive difference in the feasibility of understanding the world in these two cases; deducing useful information about the world and leveraging that is vastly easier for humans.
The distribution humans are trained on is vastly richer than what these ML models would be trained on.
As Ngo points out, the genomic bottleneck pushes towards brains that can figure things out on the fly.
3: It’s actually very hard for a model in this setting to take catastrophic actions.
E.g. you need situational awareness and theory of mind at quite a deep level, the ability to form plans that extend beyond the current context, the ability to coordinate with copies of yourself (if you can’t take over during a single input), and an actual plan for taking over (one which bypasses any containment and other measures humans may have built).
In addition to general capabilities such as theory of mind, pulling off a takeover plan seems to require quite a lot of specific information about the model’s situation that might be very difficult to access. E.g. even assuming a software-engineer model understands that the programs it writes are being run in a sandbox environment in a data center, it seems very hard to go from that to taking over the world if the model lacks any specific information about the sandbox, the data center, the Internet, and the physical entities in the world, especially if humans put even a half-hearted effort into containment.
I think observations from LLMs are a reason for optimism: despite being trained on (relatively) very rich data that clearly incentivizes thinking about reality as a whole, and despite being pretty decent in domains like programming and science, they are pretty bad at the skills and capabilities I outlined. For example, GPT-4 level models stumble in theory-of-mind (e.g. here and here) and are not terribly good at situational awareness (see here), despite these being (relatively) very training-relevant. And clearly GPT-4 level models are, by a wide margin, incapable of escaping containment even without any particular countermeasures.
(The takes here are standard AI control points.)
In aggregate I find these arguments compelling for expecting models not to generalize to human level, or clearly superhuman level, in things like escaping containment, which we very much didn’t train the model to do, before the model is vastly superhuman in the thing we did train it for.
Reactions to arguments given by Yudkowsky.
One argument Yudkowsky gives is:
I put to you that there is a predictable bias in your estimates, where you don’t know about the Deep Stuff that is required to prove theorems, so you imagine that certain cognitive capabilities are more disjoint than they actually are. If you knew about the things that humans are using to reuse their reasoning about chipped handaxes and other humans, to prove math theorems, you would see it as more plausible that proving math theorems would generalize to chipping handaxes and manipulating humans.
There’s an important asymmetry between
“Things which reason about chipped handaxes and other Things can prove math theorems”
and
“Things which can prove math theorems can reason about chipped handaxes and other Things”,
namely that math is a very fundamental thing in a way that chipping handaxes and manipulating humans are not.
I do grant there is math underlying those skills (e.g. 3D Euclidean geometry, mathematical physics, game theory, information theory), and one can formulate math theorems that essentially correspond to e.g. chipping handaxes, so theorem-proving and handaxe-chipping are not disjoint as domains. But the degree of generalization needed for a theorem-prover trained on old-school fancy-math theorems to solve problems like manipulating humans is very large.
There’s also this interaction:
Ngo: And that if you put agents in environments where they answer questions but don’t interact much with the physical world, then there will be many different traits which are necessary for achieving goals in the real world which they will lack, because there was little advantage to the optimiser of building those traits in.
Yudkowsky: I’ll observe that TransformerXL built an attention window that generalized, trained it on I think 380 tokens or something like that, and then found that it generalized to 4000 tokens or something like that.
I think Ngo’s point is very reasonable, and I feel underwhelmed by Yudkowsky’s response: I think it’s a priori reasonable to expect attention mechanisms to generalize, by design, to a larger number of tokens, and this is a very weak form of generalization in comparison to what is needed for takeover.
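As a small illustration of why this kind of length generalization is unsurprising: no parameter in scaled dot-product attention has a shape that depends on the sequence length, so the very same weights mechanically apply to longer inputs; whether the outputs stay good at longer lengths is a separate question, which is what relative positional encodings like TransformerXL’s are meant to help with. A toy sketch (random weights standing in for a trained model, dimensions made up):

```python
# Toy illustration: the same attention parameters apply to any sequence length,
# because nothing in scaled dot-product attention has a length-dependent shape.
import numpy as np

d_model = 16
rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))

def attention(x: np.ndarray) -> np.ndarray:
    """x: (seq_len, d_model) -> (seq_len, d_model), for any seq_len."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(d_model)                        # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)             # row-wise softmax
    return weights @ v

short = attention(rng.standard_normal((380, d_model)))   # roughly the "trained" length from the quote
long = attention(rng.standard_normal((4000, d_model)))   # a much longer input, same weights
print(short.shape, long.shape)  # (380, 16) (4000, 16)
```

This is the sense in which attention generalizes to more tokens “by design”.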
Overall I couldn’t find object-level arguments by Yudkowsky for expecting strong generalization that I found compelling (in this discussion or elsewhere). There are many high-level conceptual points Yudkowsky makes (e.g. sections 1.1 and 1.2 of this post have many hard-to-quote parts that I appreciated, and he has of course written a lot over the years) that I agree with and which point towards “there are laws of cognition that underlie seemingly-disjoint domains”. Ultimately I still think the generalization problems are quantitatively difficult enough that you can get away with building superhuman models in narrow domains, without them posing non-negligible catastrophic risk.
In his later conversation with Ngo, Yudkowsky writes (timestamp 17:46 there) about the possibility of doing science with “shallow” thoughts. Excerpt:
then I ask myself about people in 5 years being able to use the shallow stuff in any way whatsoever to produce the science papers
and of course the answer there is, “okay, but is it doing that without having shallowly learned stuff that adds up to deep stuff which is why it can now do science”
and I try saying back “no, it was born of shallowness and it remains shallow and it’s just doing science because it turns out that there is totally a way to be an incredibly mentally shallow skillful scientist if you think 10,000 shallow thoughts per minute instead of 1 deep thought per hour”
and my brain is like, “I cannot absolutely rule it out but it really seems like trying to call the next big surprise in 2014 and you guess self-driving cars instead of Go because how the heck would you guess that Go was shallower than self-driving cars”
I stress that my reasons for relative optimism are not only about the “shallowness” of the thoughts, but also about the model being trained on a very narrow domain, which causes it to lack a lot of the very out-of-distribution capabilities and information it would need to cause a catastrophe.
(A reply from another commenter:) I recommend making this into a post at some point (not necessarily right now, given that you said it is only “a piece” of your position).
I first considered making a top-level post about this, but it felt kinda awkward, since a lot of this is a response to Yudkowsky (and his points in this post in particular) and I had to provide a lot of context and quotes there.
(I do have some posts about AI control coming up that are more standalone “here’s what I believe”, but that’s a separate thing and does not directly respond to a Yudkowskian position.)
Making a top-level post of course gets you more views and likes and whatnot; I’m sad that high-quality comments on old posts very easily go unnoticed and get much less response than low-quality top-level posts. It might be locally sensible to write a shortform that says “hey I wrote this long effort-comment, maybe check it out”, but I don’t like this being the solution either. I would like to see the frontpage allocating relatively more attention towards this sort of thing over a flood of new posts. (E.g. your effort-comments strike me as “this makes most sense as a comment, but man, the site does currently give this stuff very little attention”, and I’m not happy about this.)