Planned summary for the Alignment Newsletter:
A robust agent-agnostic process (RAAP) is a process that robustly leads to a particular outcome without being very sensitive to the details of which agents participate in it or how they work. This is illustrated through a “Production Web” failure story, which roughly goes as follows:
A breakthrough in AI technology leads to a wave of automation of $JOBTYPE (e.g. management) jobs. Any companies that don’t adopt this automation are outcompeted, so most of these jobs are soon completely automated. This leads to significant gains at these companies and higher growth rates. These semi-automated companies trade amongst each other frequently, and a new generation of “precision manufacturing” companies arises that can build almost anything using robots, given the right raw materials. A few companies develop new software that can automate $OTHERJOB (e.g. engineering) jobs. Within a few years, nearly all human workers have been replaced.
These companies are now roughly maximizing production within their various industry sectors. Lots of goods are produced and sold to humans at incredibly cheap prices. However, we can’t understand exactly how this is happening. Even board members of the fully mechanized companies can’t tell whether the companies are serving or merely appeasing humanity; government regulators have no chance.
We do realize that the companies are maximizing objectives that are incompatible with preserving our long-term well-being and existence, but we can’t do anything about it because the companies are both well-defended and essential for our basic needs. Eventually, resources critical to human survival but not to machines (e.g., arable land, drinking water, atmospheric oxygen…) become depleted or destroyed, until humans can no longer survive.
Notice that in this story it didn’t really matter what job type got automated first (nor did it matter which specific companies took advantage of the automation). This is the defining feature of a RAAP—the same general story arises even if you change around the agents that are participating in the process. In particular, in this case competitive pressure to increase production acts as a “control loop” that ensures the same outcome happens, regardless of the exact details about which agents are involved.
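The “control loop” framing can be made concrete with a toy model. The sketch below is not from the original post; it is a minimal simulation with made-up parameters in which firms that automate more gain market share and lagging firms copy the leaders. Which firms get the breakthrough first is randomized, yet every run ends with essentially full automation, which is the agent-agnostic property the story relies on.

```python
# Toy illustration (not from the original post): competitive pressure as an
# agent-agnostic control loop. All numbers here are made up for illustration.
import random

def simulate(seed, n_firms=20, steps=60):
    rng = random.Random(seed)
    # Each firm has an automation level in [0, 1] and a market share.
    firms = [{"automation": 0.0, "share": 1.0 / n_firms} for _ in range(n_firms)]
    # A random handful of firms gets the automation breakthrough first.
    for i in rng.sample(range(n_firms), k=3):
        firms[i]["automation"] = 0.2

    for _ in range(steps):
        # Output scales with automation; more productive firms gain market share.
        outputs = [f["share"] * (1.0 + 4.0 * f["automation"]) for f in firms]
        total = sum(outputs)
        for f, out in zip(firms, outputs):
            f["share"] = out / total
        # Firms that lag behind adopt more automation or keep losing share
        # (the "adopt or be outcompeted" step of the story).
        best = max(f["automation"] for f in firms)
        for f in firms:
            if f["automation"] < best:
                f["automation"] = min(1.0, f["automation"] + rng.uniform(0.0, 0.1))

    # Share-weighted automation of the whole economy at the end of the run.
    return sum(f["automation"] * f["share"] for f in firms)

# Different seeds mean different firms automate first, but the endpoint is the same.
for seed in range(3):
    print(f"seed {seed}: final share-weighted automation = {simulate(seed):.2f}")
```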
Planned opinion (shared with Another (outer) alignment failure story):
The previous story and this one seem quite similar, and both seem pretty reasonable to me as descriptions of one plausible failure mode we are aiming to avert. The previous story tends to frame this more as a failure of humanity’s coordination, while this one frames it (in the title) as a failure of intent alignment. It seems like both of these aspects greatly increase the plausibility of the story, or in other words, if we eliminated or made significantly less bad either of the two failures, then the story would no longer seem very plausible.
A natural next question is then which of the two failures would be best to intervene on; that is, is it more useful to work on intent alignment, or on coordination? I’ll note that my best guess is that for any given person, this effect is minor relative to “which of the two topics is the person more interested in?”, so it doesn’t seem hugely important to me. Nonetheless, my guess is that on the current margin, for technical research in particular, holding all else equal, it is more impactful to focus on intent alignment. You can see a much more vigorous discussion in e.g. [this comment thread](https://www.alignmentforum.org/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic?commentId=3czsvErCYfvJ6bBwf).
The previous story tends to frame this more as a failure of humanity’s coordination, while this one frames it (in the title) as a failure of intent alignment. It seems like both of these aspects greatly increase the plausibility of the story, or in other words, if we eliminated or made significantly less bad either of the two failures, then the story would no longer seem very plausible.
Yes, I agree with this.
A natural next question is then which of the two failures would be best to intervene on; that is, is it more useful to work on intent alignment, or on coordination? I’ll note that my best guess is that for any given person, this effect is minor relative to “which of the two topics is the person more interested in?”, so it doesn’t seem hugely important to me.
Yes! +10 to this! For some reason when I express opinions of the form “Alignment isn’t the most valuable thing on the margin”, alignment-oriented folks (e.g., Paul here) seem to think I’m saying you shouldn’t work on alignment (which I’m not), which triggers a “Yes, this is the most valuable thing” reply. I’m trying to say “Hey, if you care about AI x-risk, alignment isn’t the only game in town”, and staking some personal reputation points to push against the status quo where almost everyone who is x-risk-oriented works on alignment and almost nobody who is x-risk-oriented works on cooperation/coordination or multi/multi delegation.
Perhaps I should start saying “Guys, can we encourage folks to work on both issues please, so that people who care about x-risk have more ways to show up and professionally matter?”, and maybe that will trigger less pushback of the form “No, alignment is the most important thing”…
For some reason when I express opinions of the form “Alignment isn’t the most valuable thing on the margin”, alignment-oriented folks (e.g., Paul here) seem to think I’m saying you shouldn’t work on alignment
In fairness, writing “marginal deep-thinking researchers [should not] allocate themselves to making alignment […] cheaper/easier/better” is pretty similar to saying “one shouldn’t work on alignment.”
(I didn’t read you as saying that Paul or Rohin shouldn’t work on alignment, and indeed I’d care much less about that than about a researcher at CHAI arguing that CHAI students shouldn’t work on alignment.)
On top of that, in your prior post you make stronger claims:
“Contributions to OODR research are not particularly helpful to existential safety in my opinion.”
“Contributions to preference learning are not particularly helpful to existential safety in my opinion”
“In any case, I see AI alignment in turn as having two main potential applications to existential safety:” (excluding the main channel Paul cares about and argues for, namely that making alignment easier improves the probability that the bulk of deployed ML systems are aligned and reduces the competitive advantage for misaligned agents)
In the current post you (mostly) didn’t make claims about the relative value of different areas, and so I was (mostly) objecting to arguments that I consider misleading or incorrect. But you appeared to be sticking with the claims from your prior post and so I still ascribed those views to you in a way that may have colored my responses.
maybe that will trigger less pushback of the form “No, alignment is the most important thing”…
I’m not really claiming that AI alignment is the most important thing to work on (though I do think it’s among the best ways to address problems posed by misaligned AI systems in particular). I’m generally supportive of and excited about a wide variety of approaches to improving society’s ability to cope with future challenges (though multi-agent RL or computational social choice would not be near the top of my personal list).
Perhaps I should start saying “Guys, can we encourage folks to work on both issues please, so that people who care about x-risk have more ways to show up and professionally matter?”, and maybe that will trigger less pushback of the form “No, alignment is the most important thing”…
I think that probably would be true.
For some reason when I express opinions of the form “Alignment isn’t the most valuable thing on the margin”, alignment-oriented folks (e.g., Paul here) seem to think I’m saying you shouldn’t work on alignment (which I’m not), which triggers a “Yes, this is the most valuable thing” reply.
Fwiw my reaction is not “Critch thinks Rohin should do something else”, it’s more like “Critch is saying something I believe to be false on an important topic that lots of other people will read”. I generally want us as a community to converge to true beliefs on important things (part of my motivation for writing a newsletter) and so then I’d say “but actually alignment still seems like the most valuable thing on the margin because of X, Y and Z”.
(I’ve had enough conversations with you at this point to know the axes of disagreement, and I think you’ve convinced me that “which one is better on the margin” is not actually that important a question to get an answer to. So now I don’t feel as much of an urge to respond that way. But that’s how I started out.)
Got it, thanks!