Your core claim is that all of these five questions need to be answered to minimize AI X-risk. Not only do I disagree with this, I claim that zero of these questions need to be answered to minimize AI X-risk.
Let’s go through them in order...
What is the predicted architecture of the learning algorithm(s) used by AGI?
My mainline vision for a theory of alignment and agency would be sort of analogous to thermodynamics. Thermodynamics does not care about what architecture we use for our heat engines. Rather, it establishes the universal constraints which apply to all possible heat engines. (… or at least all heat engines which work with more-than-exponentially-tiny-probability.) Likewise, I want a theory of alignment and agency which establishes the universal constraints which apply to all agents (or at least all agents which “work” with more-than-exponentially-tiny-probability).
Why would we expect to be able to find such a theory? One argument: we don’t expect that the alignment problem itself is highly architecture-dependent; it’s a fairly generic property of strong optimization. So, “generic strong optimization” looks like roughly the right level of generality at which to understand alignment. (This is not the only argument for our ability to find such a theory, but it’s a relatively simple one which doesn’t need a lot of foundations.) Trying to zoom in on something narrower than that would add a bunch of extra constraints which are effectively “noise”, for purposes of understanding alignment.
On top of that, there’s the obvious problem that if we try to solve alignment for a particular architecture, it’s quite probable that some other architecture will come along and all our work will be obsolete. (At the current pace of ML progress, this seems to happen roughly every 5 years.)
Put all that together, and I think this question is not only unnecessary, but plausibly actively harmful as a guide for alignment research.
(I also note that you have a whole section in your post on question 2 which correctly identifies most of the points I just made; all it’s missing is the step of “oh, maybe we just don’t actually need to know about the details of the architecture at all”.)
What are the most likely bad outcomes of this learning architecture?
What are the control proposals for minimizing these bad outcomes?
I also think these two together are potentially actively harmful. I think the best explanation of this view is currently Yudkowsky’s piece on Security Mindset; “figure out the most likely bad outcomes and then propose solutions to minimize these bad outcomes” is exactly what he’s arguing against. One sentence summary: it’s the unknown unknowns that kill us. The move we want is not “brainstorm failure modes and then avoid the things we brainstormed”, it’s “figure out what we want and then come up with a strategy which systematically achieves it (automatically ruling out huge swaths of failure modes simultaneously)”.
What are the implementation proposals for these control proposals?
Setting aside that I don’t agree with the “control proposals” framing, this question comes the closest to being actually necessary. Certainly we’ll need implementations of something at some point.
On the other hand, starting from where we are now, I expect implementation to be relatively easy once we have any clue at all what to implement. So even if it’s technically necessary to answer at some point, this question might not be very useful to think about ahead of time. We could solve the problem to a point where AI risk is minimized without necessarily putting significant thought into implementation proposals, especially if the core math ends up being obviously tractable. (Though, to be clear, I don’t think that’s a good idea; trying to build a great edifice of theory without empirical feedback of some kind is rarely useful in practice.)
What is the predicted timeline for the development of AGI?
Personally, I consider timelines approximately-irrelevant for my research plans. Whatever the probable-shortest-path is to aligned AI, that’s the path to follow, regardless of how long we have.
The case for timeline-relevance is usually “well, if we don’t have any hope of properly solving the problem in time, then maybe we need a hail Mary”. That’s a valid argument in principle, but in practice, when we multiply together probability-of-hail-Mary-actually-working and probability-that-AI-is-coming-that-soon, I expect that number to basically never favor the hail Mary. It would require both an implausibly high probability of the hail Mary working and an implausibly high confidence that AGI is right around the corner.
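To make the shape of that argument concrete, here’s a toy expected-value sketch (every number is a made-up placeholder, not an estimate; the point is only that the hail-Mary term is a product of two small probabilities):

```python
# Toy comparison: expected value of sticking to the probable-shortest-path
# vs. switching to a hail Mary. All probabilities are hypothetical placeholders.

p_short_timeline = 0.2     # hypothetical P(AGI arrives before the proper path can finish)
p_hail_mary_works = 0.02   # hypothetical P(a hail-Mary approach actually produces alignment)
p_proper_path_works = 0.4  # hypothetical P(the proper path works, given enough time)

# The hail Mary only matters in the short-timeline world; assume it contributes nothing otherwise.
ev_hail_mary = p_short_timeline * p_hail_mary_works

# The proper path pays off mainly in the world where there is enough time.
ev_proper_path = (1 - p_short_timeline) * p_proper_path_works

print(f"EV(hail Mary)   ~ {ev_hail_mary:.3f}")    # 0.004
print(f"EV(proper path) ~ {ev_proper_path:.3f}")  # 0.320
```

You can push these placeholder numbers around quite a bit before the product on the hail-Mary side catches up, which is what “basically never” is pointing at.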
Now, I do expect other people to disagree with that argument (mainly because they have less hope about solving alignment anytime soon without a Hail Mary). But remember that the post’s original claim is that timeline estimates are necessary for alignment, which seems like far too strong a claim when I’m sitting here with an at-least-internally-coherent view in which timelines are mostly irrelevant.
More Generally...
Zooming out a level, I think the methodology used to generate these questions is flawed. If you want to identify necessary subquestions, then the main way I know how to do that is to consider a wide variety of approaches, and look for subquestions which are clearly crucial to all of them. Then, try to generate further approaches which circumvent those subquestions, and that counterexample-search-process will probably make clear why the subquestions are necessary.
When I imagine what process would generate the questions in this post, I imagine starting with one single approach, looking for subquestions which are clearly crucial to that one approach, and then trying to come up with arguments that those subquestions are necessary (without really searching for necessity-counterexamples to stress-test those arguments).
If I’ve mischaracterized your process, then I apologize in advance, but currently this hypothesis seems pretty likely.
My recommendation is to go find some entirely different approaches, look for patterns which hold up across approaches, and consider what underlying features of the problem generate those patterns.
On The Bright Side
Complaining aside, you’ve clearly correctly understood that the subquestions need to be necessary subquestions in order to form a paradigm; that necessity is what allows the paradigm to generalize across the work done by many different people.
I do think that insight is the rate-limiting factor for most people explicitly trying to come up with paradigms. So well done there! I think you’re already past the biggest barrier. The next few barriers will involve a lot of frustrating work, a lot of coming up with frameworks which seem good to you only to have other people shoot holes in them, but I think you are probably capable of doing it if you decide to pursue it for a while.
Thanks for taking the time to write up your thoughts! I appreciate your skepticism. Needless to say, I don’t agree with most of what you’ve written—I’d be very curious to hear if you think I’m missing something:
[We] don’t expect that the alignment problem itself is highly architecture-dependent; it’s a fairly generic property of strong optimization. So, “generic strong optimization” looks like roughly the right level of generality at which to understand alignment... Trying to zoom in on something narrower than that would add a bunch of extra constraints which are effectively “noise”, for purposes of understanding alignment.
Surely understanding generic strong optimization is necessary for alignment (as I also spend most of Q1 discussing). How can you be so sure, however, that zooming into something narrower would effectively only add noise? You assert this, but this doesn’t seem at all obvious to me. I write in Q2: “It is also worth noting immediately that even if particular [alignment problems] are architecture-independent [your point!], it does not necessarily follow that the optimal control proposals for minimizing those risks would also be architecture-independent! For example, just because an SL-based AGI and an RL-based AGI might both hypothetically display tendencies towards instrumental convergence does not mean that the way to best prevent this outcome in the SL AGI would be the same as in the RL AGI.”
By analogy, consider the more familiar ‘alignment problem’ of training dogs (i.e., getting the goals of dogs to align with the goals of their owners). Surely there are ‘breed-independent’ strategies for doing this, but it is not obvious that these strategies will be sufficient for every breed—e.g., Afghan Hounds are apparently way harder to train than, say, Golden Retrievers. So in addition to the generic-dog-alignment-regime, Afghan Hounds require some additional special training to ensure they’re aligned. I don’t yet understand why you are confident that different possible AGIs could not follow this same pattern.
On top of that, there’s the obvious problem that if we try to solve alignment for a particular architecture, it’s quite probable that some other architecture will come along and all our work will be obsolete. (At the current pace of ML progress, this seems to happen roughly every 5 years.)
I think that you think that I mean something far more specific than I actually do when I say “particular architecture,” so I don’t think this accurately characterizes what I believe. I describe my view in the next post.
[It’s] the unknown unknowns that kill us. The move we want is not “brainstorm failure modes and then avoid the things we brainstormed”, it’s “figure out what we want and then come up with a strategy which systematically achieves it (automatically ruling out huge swaths of failure modes simultaneously)”.
I think this is a very interesting point (and I have not read Eliezer’s post yet, so I am relying on your summary), but I don’t see what the point of AGI safety research is if we take this seriously. If the unknown unknowns will kill us, how are we to avoid them even in theory? If we can articulate some strategy for addressing them, they are not unknown unknowns; they are “increasingly-known unknowns!”
I devoted the entire first post of this sequence to “figuring out what we want” (we = AGI safety researchers). It seems like what we want is to avoid AGI-induced existential risks. (I am curious if you think this is wrong?) If so, I claim, here is a “strategy that might systematically achieve this”: we need to understand what we mean when we say AGI (Q1), figure out what risks are likely to emerge from AGI (Q2), mitigate these risks (Q3), and implement these mitigation strategies (Q4).
If by “figure out what we want,” you mean “figure out what we want out of an AGI,” I definitely agree with this (see Robert’s great comment below!). If by “figure out what we want,” you mean “figure out what we want out of AGI safety research,” well, that is the entire point of this sequence!
I expect implementation to be relatively easy once we have any clue at all what to implement. So even if it’s technically necessary to answer at some point, this question might not be very useful to think about ahead of time.
I completely disagree with this. It will definitely depend on the competitiveness of the relevant proposals, the incentives of the people who have control over the AGI, and a bunch of other stuff that I discuss in Q4 (which hasn’t even been published yet—I hope you’ll read it!).
in practice, when we multiply together probability-of-hail-Mary-actually-working and probability-that-AI-is-coming-that-soon, I expect that number to basically never favor the hail Mary.
When you frame it this way, I completely agree. However, there is definitely a continuous space of plausible timelines between “all-the-time-in-the-world” and “hail-Mary,” and I think the probabilities of success [P(success|timeline) * P(timeline)] fluctuate non-obviously across this spectrum. Again, I hope you will withhold your final judgment of my claim until you see how I defend it in Q5! (I suppose my biggest regret in posting this sequence is that I didn’t just do it all at once.)
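To illustrate what I mean by P(success|timeline) * P(timeline) fluctuating across that spectrum, here is a minimal sketch (the strategy names and every number are purely hypothetical placeholders for illustration, not claims from the sequence):

```python
# Sketch of P(success) = sum over timelines of P(success | timeline) * P(timeline)
# for two hypothetical strategies. All names and numbers are placeholders.

# Timeline buckets (indices): 0 = ~5 years, 1 = ~15 years, 2 = ~30 years
p_timeline = [0.2, 0.5, 0.3]  # hypothetical timeline distribution (sums to 1)

p_success_given_timeline = {
    "theory-first":  [0.05, 0.30, 0.60],  # hypothetical: pays off mostly on longer timelines
    "prosaic-first": [0.15, 0.25, 0.30],  # hypothetical: less sensitive to the timeline
}

for strategy, conditionals in p_success_given_timeline.items():
    total = sum(p_s * p_t for p_s, p_t in zip(conditionals, p_timeline))
    print(f"P(success | strategy = {strategy}) = {total:.3f}")

# theory-first:  0.2*0.05 + 0.5*0.30 + 0.3*0.60 = 0.340
# prosaic-first: 0.2*0.15 + 0.5*0.25 + 0.3*0.30 = 0.245
# Shift p_timeline toward the short end and the ranking can flip, which is why
# I think the timeline question (Q5) feeds into which strategy looks best.
```

The point is just that which strategy comes out ahead depends on the timeline distribution, and that dependence is what Q5 is meant to capture.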
Zooming out a level, I think the methodology used to generate these questions is flawed. If you want to identify necessary subquestions, then the main way I know how to do that is to consider a wide variety of approaches, and look for subquestions which are clearly crucial to all of them.
I think this is a bit uncharitable. I have worked with and/or talked to lots of different AGI safety researchers over the past few months, and this framework is the product of my having “consider[ed] a wide variety of approaches, and look[ed] for subquestions which are clearly crucial to all of them.” Take, for instance, this chart in Q1—I am proposing a single framework for talking about AGI that potentially unifies brain-based vs. prosaic approaches. That seems like a useful and productive thing to be doing at the paradigm-level.
I definitely agree that things like how we define ‘control’ and ‘bad outcomes’ might differ between approaches, but I do claim that every approach I have encountered thus far operates using the questions I pose here (e.g., every safety approach cares about AGI architectures, bad outcomes, control, etc. of some sort). To test this claim, I would very much appreciate the presentation of a counterexample if you think you have one!
Thanks again for your comment, and I definitely want to flag that, in spite of disagreeing with it in the ways I’ve tried to describe above, I really do appreciate your skepticism and engagement with this sequence (I cite your preparadigmatic claim a number of times in it).
As I said to Robert, I hope this sequence is read as something much more like a dynamic draft of a theoretical framework than my Permanent Thoughts on Paradigms for AGI Safety™.
Surely understanding generic strong optimization is necessary for alignment (as I also spend most of Q1 discussing). How can you be so sure, however, that zooming into something narrower would effectively only add noise? You assert this, but this doesn’t seem at all obvious to me.
I mean, I don’t actually need to defend the assertion all that much. Your core claim is that these questions are necessary, and therefore the burden is on you to argue not only that zooming in on something narrower might not just add noise, but that zooming in on something narrower will not just add noise. If it’s possible that we could get to a point where AGI is no longer a serious threat without needing to answer the question, then the question is not necessary.
Also, regarding the Afghan hound example, I’d guess (without having read anything about the subject) that training Afghan hounds does not actually involve qualitatively different methods than training other dogs, they just need more of the same training and/or perform less well with the same level of training. Not that that’s particularly central. The more important part is that I do not need to be confident that “different possible AGIs could not follow this same pattern”; you’ve taken upon yourself the burden of arguing that different possible AGIs must follow this pattern, otherwise question 1 might not be necessary.
If by “figure out what we want,” you mean “figure out what we want out of an AGI,” I definitely agree with this (see Robert’s great comment below!).
That is basically what I mean, yes. I strongly recommend the Yudkowsky piece.
I completely disagree with [implementation being relatively easy/unhelpful to think about ahead of time]. It will definitely depend on the competitiveness of the relevant proposals, the incentives of the people who have control over the AGI, and a bunch of other stuff that I discuss in Q4 (which hasn’t even been published yet—I hope you’ll read it!).
Remember that if you want to argue necessity of the question, then it’s not enough for these inputs to be relevant to the outcome of AGI, you need to argue that the question must be answered in order for AGI to go well. Just because some factors are relevant to the outcome does not mean that we must know those factors in advance in order to robustly achieve a good outcome.
However, there is definitely a continuous space of plausible timelines between “all-the-time-in-the-world” and “hail-Mary,” and I think the probabilities of success [P(success|timeline) * P(timeline)] fluctuate non-obviously across this spectrum
Remember that if you want to argue necessity of the question, it is not enough for you to think that the probabilities fluctuate; you need a positive argument that the probabilities must fluctuate across the spectrum, by enough that the question must be addressed.
I definitely agree that things like how we define ‘control’ and ‘bad outcomes’ might differ between approaches, but I do claim that every approach I have encountered thus far operates using the questions I pose here (e.g., every safety approach cares about AGI architectures, bad outcomes, control, etc. of some sort). To test this claim, I would very much appreciate the presentation of a counterexample if you think you have one!
I think most of the strategies in MIRI’s general cluster do not depend on most of these questions.
If it’s possible that we could get to a point where AGI is no longer a serious threat without needing to answer the question, then the question is not necessary.
Agreed, this seems like a good definition for rendering anything as ‘necessary.’
Our goal: minimize AGI-induced existential threats (right?).
My claim is that answering these questions is probably necessary for achieving this goal—i.e., P(achieving goal | failing to think about one or more of these questions) ≈ 0. (I say, “I am claiming that a research agenda that neglects these questions would probably not actually be viable for the goal of AGI safety work.”)
That is, we would be exceedingly lucky to achieve AGI safety’s goal without thinking about
what we mean when we say AGI (Q1),
what existential risks are likely to emerge from AGI (Q2),
how to address these risks (Q3),
how to implement these mitigation strategies (Q4), and
how quickly we actually need to answer these questions (Q5).
I really don’t see how it could be any other way: if we want to avoid futures in which AGI does bad stuff, we need to think about avoiding (Q3/Q4) the bad stuff (Q2) that AGI (Q1) might do (and we have to do this all “before the deadline;” Q5). I propose a way to do this hierarchically. Do you see wiggle room here where I do not?
FWIW, I also don’t really think this is the core claim of the sequence. I would want that to be something more like “here is a useful framework for moving from point A (where the field is now) to point B (where the field ultimately wants to end up).” I have not seen a highly compelling presentation of this sort of thing before, and I think it is very valuable in solving any hard problem to have a general end-to-end plan (which we probably will want to update as we go along; see Robert’s comment).
I think most of the strategies in MIRI’s general cluster do not depend on most of these questions.
Would you mind giving a specific example of an end-to-end AGI safety research agenda that you think does not depend on or attempt to address these questions? (I’m also happy to just continue this discussion off of LW, if you’d like.)
Would you mind giving a specific example of an end-to-end AGI safety research agenda that you think does not depend on or attempt to address these questions?
I think restricting oneself to end-to-end agendas is itself a mistake. One principle of e.g. the MIRI agenda is that we do not currently possess a strong enough understanding to create an end-to-end agenda which has any hope at all of working; anything which currently claims to be an end-to-end agenda is probably just ignoring the hard parts of the problem. (The Rocket Alignment Problem gives a good explanation of this view.)
I do think that finding necessary subquestions, or noticing that a given subquestion may not be necessary, is much easier than figuring out an end-to-end agenda. One can notice that e.g. an architecture-agnostic alignment strategy seems plausible (or arguably even necessary!) without figuring out all the steps of an end-to-end strategy.
Definitely agree that if we silo ourselves into any rigid plan now, it almost certainly won’t work. However, I don’t think ‘end-to-end agenda’ = ‘rigid plan.’ I certainly don’t think this sequence advocates anything like a rigid plan. These are the most general questions I could imagine guiding the field, and I’ve already noted that I think this should be a dynamic draft.
...we do not currently possess a strong enough understanding to create an end-to-end agenda which has any hope at all of working; anything which currently claims to be an end-to-end agenda is probably just ignoring the hard parts of the problem.
What hard parts of the problem do you think this sequence ignores?
(I explicitly claim throughout the sequence that what I propose is not sufficient, so I don’t think I can be accused of ignoring this.)
Hate to just copy and paste, but I still really don’t see how it could be any other way: if we want to avoid futures in which AGI does bad stuff, then we need to think about avoiding (Q3/Q4) the bad stuff (Q2) that AGI (Q1) might do (and we have to do this all “before the deadline;” Q5). This is basically tautological as far as I can tell. Do you agree or disagree with this if-then statement?
I do think that finding necessary subquestions, or noticing that a given subquestion may not be necessary, is much easier than figuring out an end-to-end agenda.
Agreed. My goal was to enumerate these questions. When I noticed that they followed a fairly natural progression, I decided to frame them hierarchically. And, I suppose to your point, it wasn’t necessarily easy to write this all up. I thought it would nonetheless be valuable to do so, so I did!
Thanks for linking the Rocket Alignment Problem—looking forward to giving it a closer read.
I still really don’t see how it could be any other way: if we want to avoid futures in which AGI does bad stuff, then we need to think about avoiding (Q3/Q4) the bad stuff (Q2) that AGI (Q1) might do (and we have to do this all “before the deadline;” Q5). This is basically tautological as far as I can tell. Do you agree or disagree with this if-then statement?
My comment at the top of this thread detailed my disagreement with that if-then statement, and I do not think any of your responses to my top-level comment actually justified the claim of necessity of the questions. Most of them made the same mistake, which I tried to emphasize in my response. This, for example:
How can you be so sure, however, that zooming into something narrower would effectively only add noise?
The question is not “How can John be so sure that zooming into something narrower would only add noise?”, the question is “How can Cameron be so sure that zooming into something narrower would yield crucial information without which we have no realistic hope of solving the problem?”.
I think this same issue applies to most of the rest of your replies to my original comment.
The question is not “How can John be so sure that zooming into something narrower would only add noise?”, the question is “How can Cameron be so sure that zooming into something narrower would yield crucial information without which we have no realistic hope of solving the problem?”.
I am not ‘so sure’—as I said in the previous comment, I have only claim(ed) it is probably necessary to, for instance, know more about AGI than just whether it is a ‘generic strong optimizer.’ I would only be comfortable making non-probabilistic claims about the necessity of particular questions in hindsight.
I don’t think I’m making some silly logical error. If your question is, “Why does Cameron think it is probably necessary to understand X if we want to have any realistic hope of solving the problem?”, well, I do not think this is rhetorical! I spend an entire post defending and elucidating each of these questions, and I hope by the end of the sequence, readers would have a very clear understanding of why I think each is probably necessary to think about (or I have failed as a communicator!).
It was never my goal to defend the (probable) necessity of each of the questions in this one post—this is the point of the whole sequence! This post is a glorified introductory paragraph.
I do not think, therefore, that this post serves as anything close to an adequate defense of this framework, and I understand your skepticism if you think this is all I will say about why these questions are important.
However, I don’t think your original comment—or any of this thread, for that matter—really addresses any of the important claims put forward in this sequence (which makes sense, given that I haven’t even published the whole thing yet!). It also seems like some of your skepticism is being fueled by assumptions about what you predict I will argue as opposed to what I will actually argue (correct me if I’m wrong!).
I hope you can find the time to actually read through the whole thing once it’s published before passing your final judgment. Taken as a whole, I think the sequence speaks for itself. If you still think it’s fundamentally bullshit after having read it, fair enough :)