Thanks for the response. I realize this kind of conversation can be annoying (but I think it’s important). [I’ve included various links below, but they’re largely intended for readers-that-aren’t-you]
I don’t see why this isn’t a fully general counterargument to alignment work. Your argument sounds to me like “there will always be some existentially risky failures left, so if we proceed we will get doom. Therefore, we should avoid solving some failures, because those failures could help build political will to shut it all down”.
(Thanks for this too. I don’t endorse that description, but it’s genuinely useful to know what impression I’m creating—particularly when it’s not what I intended)
I’d rephrase my position as:
There’ll always be some risk of existential failure. We want to reduce the total risk. Each step we take forward is a tradeoff: we accept some risk on that step, and (hopefully) reduce future risk by a greater amount.
Risk might be reduced through:
Increased understanding.
Clear evidence that our current understanding isn’t sufficient. (helpful for political will, coordination...)
[various other things]
I am saying “we might get doom”.
I think the odds are high primarily because I don’t expect we’ll get a [mostly safe setup that greatly reduces our odds of [trying something plausible that causes catastrophe]]. (I may be wrong here—hopefully so).
I am not saying “we should not do safety work”; I’m saying “risk compensation needs to be a large factor in deciding which safety work to do”, and “I think few researchers take risk compensation sufficiently seriously”.
To a first approximation, I’d say the odds of [safety work on x turns out negative due to risk compensation] depend on how well decision-makers (or ‘experts’ they’re listening to) are tracking what we don’t understand: specifically, what we might expect to be different from previous experience and/or other domains, and how significant these differences are to outcomes.
Risk is highest when we believe that we understand more than we actually do.
There’s also a unilateralist’s curse issue here: it matters if there are any dangerously overconfident actors in a position to take action that’ll increase risk. (noting that [slightly overconfident] may be [dangerously overconfident] in this context)
This matters in considering the downstream impact of research. I’d be quite a bit less worried about debate research if I only had to consider [what will researchers at least as cautious as Rohin do with this?]. (though still somewhat worried :))
See also Critch’s thoughts on the need for social models when estimating impact.
We tend to have a distorted picture of our own understanding, since there’s a strong correlation between [we understand x well] and [we’re able to notice x and express it clearly].
There’s a kind of bias/variance tradeoff here: if we set our evidential standards such that we don’t focus on vague/speculative/indirect/conceptual arguments, we’ll reduce variance, but risk significant sampling bias.
Similarly, conclusions downstream of availability-heuristic-triggered thoughts will tend to be disproportionately influenced by the parts of the problem we understand (at least well enough to formulate clear questions).
I expect that some researchers actively compensate for this in their conscious deliberation, but that it’s very hard to get our intuitions to compensate appropriately.
[[EDIT: Oh, and when considering downstream impact, risk compensation etc, I think it’s hugely important that most decision-makers and decision-making systems have adapted to a world where [those who can build x understand x] holds. A natural corollary here is [the list of known problems that concern [operation of the system itself] is complete].
This implicit assumption underlies risk management approaches, governance structures and individuals’ decision-making processes.
That it’s implicit makes it much harder to address, since there aren’t usually “and here we assume that those who can build x understand x” signposts.
It might be viable to re-imagine risk-management such that this is handled.
It’s much less likely that we get to re-imagine governance structures, or the cognition of individual decision-makers.]]
Again, I don’t think it’s implausible that debate-like approaches come out better than various alternatives after carefully considering risk compensation. I do think it’s a serious error not to spend quite a bit of time and effort understanding and reducing uncertainty on this.
Possibly many researchers do this, but don’t have any clean, legible way to express their process/conclusions. I don’t get that impression: my impression is that arguments along the lines I’m making tend to be perceived as fully general counter-arguments and dismissed (whether they come from outside, or from the researchers themselves).
What’s an example of alignment work that you think is net positive with the theory of change “this is a better way to build future powerful AI systems”?
I’m not sure how direct you intend “this is a better way...” (vs e.g. “this will build the foundation of better ways...”). I guess I’d want to reframe it as “This is a better process by which to build future powerful AI systems”, so as to avoid baking in a level of concreteness before looking at the problem.
That said, the following seem good to me:
Davidad’s stuff—e.g. Towards Guaranteed Safe AI.
I’m not without worries here, and I think “Guaranteed safe”, “Provably safe” are silly, unhelpful overstatements—but the broad direction seems useful.
That said, the part I’m most keen on would be the “safety specification”, and what I’d expect here is something like [more work on this clarifies that it’s a very hard problem we’re not close to solving]. (noting that [this doesn’t directly cause catastrophe] is an empty ‘guarantee’)
ARC theory’s stuff.
I’m not sure you’d want to include this??
I’m keen on it because:
It seems likely to expand understanding—to uncover new problems.
It may plausibly be sufficiently general—closer to the [fundamental fix] than the [patch this noticeably undesirable behaviour] end of the spectrum.
Evan’s red-teaming stuff (e.g. model organisms of misalignment).
This seems net positive, and I think [finding a thousand ways not to build an AI] is part of a good process.
I do still worry that there’s a path of [find concrete problem] → [make concrete problem a target for research] → [fix concrete problems we’ve found, without fixing the underlying issues].
The ideal situation from a [mitigate risk compensation] perspective is:
1. Have as much understanding and clarity of fundamental problems as possible.
2. Have enough clear, concrete examples to make the need for caution clear to decision-makers.
To the extent that Evan et al achieve (1), I’m unreservedly in favour.
On (2) the situation is a bit less clear: we’re in more danger when there are serious problems for which we’re unable to make a clear/concrete case.
And, again, my conclusion is not “never expose new concrete problems”, but rather “consider the downsides of doing so when picking a particular line of research”.
A bunch of foundational stuff you probably also wouldn’t want to include under this heading (singular learning theory, natural abstractions, various agent-foundations-shaped things, perhaps computational mechanics (??)).
(I’m probably not going to engage with perspectives that say all current [alignment work towards building safer future powerful AI systems] is net negative, sorry. In my experience those discussions typically don’t go anywhere useful.)
Not a problem. However, I’d want to highlight the distinction between:
1. Substantial efforts to resolve this uncertainty haven’t worked out.
2. Resolving this uncertainty isn’t important.
Both are reasonable explanations for not putting much further effort into such discussions. However, I worry that the system-1s of researchers don’t distinguish between (1) and (2) here—so that the impact of concluding (1) will often be to act as if the uncertainty isn’t important (and as if the implicit corollaries hold).
I don’t see an easy way to fix this—but I think it’s an important issue to notice when considering research directions. Not only [consider this uncertainty when picking a research direction], but also [consider that your system 1 is probably underestimating the importance of this uncertainty (and corollaries), since you haven’t been paying much attention to it (for understandable reasons)].
Main points:
There’ll always be some risk of existential failure.
I am saying “we might get doom”
I am not saying “we should not do safety work”
I’m on board with these.
I’m saying “risk compensation needs to be a large factor in deciding which safety work to do”
I still don’t see why you believe this. Do you agree that in many other safety fields, safety work mostly didn’t think about risk compensation, and still drove down absolute risk? (E.g. I haven’t looked into it but I bet people didn’t spend a bunch of time thinking about risk compensation when deciding whether to include seat belts in cars.)
If you do agree with that, what makes AI different from those cases? (The arguments you give seem like very general considerations that apply to other fields as well.)
Possibly many researchers do this, but don’t have any clean, legible way to express their process/conclusions. I don’t get that impression: my impression is that arguments along the lines I’m making tend to be perceived as fully general counter-arguments and dismissed (whether they come from outside, or from the researchers themselves).
I’d say that the risk compensation argument as given here Proves Too Much and implies that most safety work in most previous fields was net negative, which seems clearly wrong to me. It’s true that as a result I don’t spend lots of time thinking about risk compensation; that still seems correct to me.
It might be viable to re-imagine risk-management such that this is handled.
It seems like your argument here, and in other parts of your comment, is something like “we could do this more costly thing that increases safety even more”. This seems like a pretty different argument; it’s not about risk compensation (i.e. when you introduce safety measures, people do more risky things), but rather about opportunity cost (i.e. when you introduce weak safety measures, you reduce the will to have stronger safety measures). This is fine, but I want to note the explicit change in argument; my earlier comment and the discussion above was not trying to address this argument.
Briefly on opportunity cost arguments, the key factors are (a) how much will is there to pay large costs for safety, (b) how much time remains to do the necessary research and implement it, and (c) how feasible is the stronger safety measure. I am actually more optimistic about both (a) and (b) than what I perceive to be the common consensus amongst safety researchers at AGI labs, but tend to be pretty pessimistic about (c) (at least relative to many LessWrongers, I’m not sure how it compares to safety researchers at AGI labs).
Anyway for now let’s just say that I’ve thought about these three factors and think it isn’t especially realistic to expect that we can get stronger safety measures, and as a result I don’t see opportunity cost as a big reason not to do the safety work we currently do.
I guess I’d want to reframe it as “This is a better process by which to build future powerful AI systems”, so as to avoid baking in a level of concreteness before looking at the problem.
Yeah, I’m not willing to do this. This seems like an instance of the opportunity cost argument, where you try to move to a paradigm that can enable stronger safety measures. See above for my response.
Similarly, the theory of change you cite for your examples seems to be “discovers or clarifies problems that show that we don’t have a solution” (including for Guaranteed Safe AI and ARC theory, even though in principle those could be about building safe AI systems). So as far as I can tell, the disagreement is really that you think current work that tries to provide a specific recipe for building safe AI systems is net negative, and I think it is net positive.
Other smaller points:
I certainly agree that it is possible for risk compensation to make safety work net negative, which is all I think you can conclude from that post (indeed the post goes out of its way to say it isn’t arguing for or against any particular work). I disagree that this effect is large enough to meaningfully change the decisions on what work we should do, given the specific work that we typically do (including debate).
if we set our evidential standards such that we don’t focus on vague/speculative/indirect/conceptual arguments
This is a weird hypothetical. The entire field of AI existential safety is focused on speculative, conceptual arguments. (I’m not quite sure what the standard is for “vague” and “indirect” but probably I’d include those adjectives too.)
I think [resolving uncertainty is] an important issue to notice when considering research directions
Why? It doesn’t seem especially action guiding, if we’ve agreed that it’s not high value to try to resolve the uncertainty (which is what I take away from your (1)).
Maybe you’re saying “for various reasons (e.g. unilateralist curse, wisdom of the crowds, coordinating with people with similar goals), you should treat the probability that your work is net negative as higher than you would independently assess, which can affect your prioritization”. I agree that there’s some effect here but my assessment is that it’s pretty small and doesn’t end up changing decisions very much.
[apologies for writing so much; you might want to skip/skim the spoilered bit, since it seems largely a statement of the obvious]
Do you agree that in many other safety fields, safety work mostly didn’t think about risk compensation, and still drove down absolute risk?
Agreed (I imagine there are exceptions, but I’d be shocked if this weren’t usually true).
[I’m responding to this part next, since I think it may resolve some of our mutual misunderstanding]
It seems like your argument here, and in other parts of your comment, is something like “we could do this more costly thing that increases safety even more”. This seems like a pretty different argument; it’s not about risk compensation (i.e. when you introduce safety measures, people do more risky things), but rather about opportunity cost (i.e. when you introduce weak safety measures, you reduce the will to have stronger safety measures). This is fine, but I want to note the explicit change in argument; my earlier comment and the discussion above was not trying to address this argument.
Perhaps we’ve been somewhat talking at cross purposes then, since I certainly consider [not acting to bring about stronger safety measures] to be within the category of [doing more risky things].
It fits the pattern of [lower perceived risk] --> [actions that increase risk].
For clarity, risky actions I’m thinking about would include:
Not pushing for stronger safety measures.
Not pushing for international coordination.
Pushing forward with capabilities more quickly.
Placing too much trust in systems off distribution.
Placing too much trust in the output of [researchers using systems to help solve alignment].
Not spending enough on more principled (by my lights) alignment research.
Some of these might be both [opportunity cost] and [risk compensation]. I.e.:
We did x.
x reduced perceived risk.
A precondition of y was greater-than-current perceived risk.
Now we can’t do y.
[Not doing y] is risky.
If you do agree with that, what makes AI different from those cases? (The arguments you give seem like very general considerations that apply to other fields as well.)
Mostly for other readers, I’m going to spoiler my answer to this: my primary claim is that more people need to think about and answer these questions themselves, so I’d suggest that readers take a little time to do this. I’m significantly more confident that the question is important, than that my answer is correct or near-complete.
I agree that the considerations I’m pointing to are general. I think the conclusions differ, since AI is different.
First a few clarifications:
I’m largely thinking of x-risk, not moderate disasters.
On this basis, I’m thinking about loss-of-control, rather than misuse.
I think if we’re talking about e.g. [misuse leads to moderate disaster], then other fields and prior AI experience become much more reasonable reference classes—and I’d expect things like debate to be net positive here.
My current guess/estimate that debate research is net negative is largely based on x-risk via loss-of-control. (again, I may be wrong—but I’d like researchers to do serious work on answering this question)
How I view AI risk being different:
First another clarification: the risk-increasing actions I’m worried about are not [individual user uses AI incautiously] but things like [AI lab develops/deploys models incautiously], and [governments develop policy incautiously] - so we shouldn’t be thinking of e.g. [people driving faster with seatbelts], but e.g. [car designers / regulators acting differently once seatbelts are a thing].
To be clear, I don’t claim that seatbelts were negative on this basis.
Key differences:
We can build AI without understanding it.
People and systems are not used to this.
Plausible subconscious heuristics that break here:
[I can build x] and [almost no-one understands x better than me], [therefore, I understand x pretty well].
[I can build x] and [I’ve worked with x for years], [therefore I have a pretty good understanding of the potential failure modes].
[We can build x], [therefore we should expect to be able to describe any likely failure modes pretty concretely/rigorously].
[I’ve discussed the risks of x with many experts who build x], [therefore I have a decent high-level understanding of potential failures].
[I’m not afraid], [therefore a deadly threat we’re not ready to deal with must be unlikely]. (I agree it’s unlikely now)
...
One failure of AI can be unrecoverable.
In non-AI cases there tend to be feedback loops that allow adjustments both in the design, and in people’s risk estimates.
Of course such loops exist for some AI failure modes.
This makes the unilateralist’s curse aspect more significant: it’s not the average understanding or caution that matters. High variance is a problem.
Knowing how this impacts risk isn’t straightforward, since it depends a lot on [the levels of coordination we expect] and [the levels of coordination that seem necessary].
My expectation is that we’re just screwed if the most reckless organizations are able to continue without constraint.
Therefore my concern is focused on worlds where we achieve sufficient coordination to make clearly reckless orgs not relevant. (and on making such worlds more likely)
Debate-style approaches seem a good fit for [this won’t actually work, but intelligent, well-informed, non-reckless engineers/decision-makers might think that it will]. I’m concerned about those who are somewhat overconfident.
The most concerning AI is very general.
This leads to a large space of potential failure modes. (both in terms of mechanism, and in terms of behaviour/outcome)
It is hard to make a principled case that we’ve covered all serious failure modes (absent a constructive argument based on various worst-case assumptions).
The most concerning AI will be doing some kind of learned optimization. (in a broad, John Wentworthy sense)
Pathogens may have the [we don’t really understand them] property, but an individual pathogen isn’t going to be general, or to be doing significant learned optimization. (if and when this isn’t true, then I’d be similarly wary of ad-hoc pathogen safety work)
New generations of AI can often solve qualitatively different problems from previous generations. With each new generation, we’re off distribution in a much more significant sense than with new generations of e.g. car design.
Unknown unknowns may arise internally due to the complex systems nature of AI.
In cases we’re used to, the unknown unknowns tend to come from outside: radically unexpected things may happen to a car, but probably not directly due to the car.
I’m sure I’m missing various relevant factors. Overall, I think straightforward reference classes for [safety impact of risk compensation] tell us approximately nothing—too much is different.
I’m all for looking for a variety of reference classes: we need all the evidence we can get. However, I think people are too ready to fall back on the best reference classes they can find—even when they’re terrible.
Briefly on opportunity cost arguments, the key factors are (a) how much will is there to pay large costs for safety, (b) how much time remains to do the necessary research and implement it, and (c) how feasible is the stronger safety measure.
This seems to miss the obvious: (d) how much safety do the weak vs strong measures get us? I expect that you believe work on debate (and similar weak measures) gets us significantly more than I do. (and hopefully you’re correct!)
Anyway for now let’s just say that I’ve thought about these three factors and think it isn’t especially realistic to expect that we can get stronger safety measures, and as a result I don’t see opportunity cost as a big reason not to do the safety work we currently do.
Given that this seems highly significant, can you:
Quantify “it isn’t especially realistic”—are we talking [15% chance with great effort], or [1% chance with great effort]?
Give a sense of reasons you expect this.
Is [because we have a bunch of work on weak measures] not a big factor in your view?
Or is [isn’t especially realistic] overdetermined, with [less work on weak measures] only helping conditional on removal of other obstacles?
I’d also note that it’s not only [more progress on weak measures] that matters here, but also the signal sent by pursuing these research directions.
If various people with influence on government see [a bunch of lab safety teams pursuing x safety directions] I expect that most will conclude: “There seems to be a decent consensus within the field that x will get us acceptable safety levels”, rather than “Probably x is inadequate, but these researchers don’t see much hope in getting stronger-than-x measures adopted”.
I assume that you personally must have some constraints on the communication strategy you’re able to pursue. However, it does seem highly important that if the safety teams at labs are pursuing agendas based on [this isn’t great, but it’s probably the best we can get], this is clearly and loudly communicated.
Similarly, the theory of change you cite for your examples seems to be “discovers or clarifies problems that show that we don’t have a solution” (including for Guaranteed Safe AI and ARC theory, even though in principle those could be about building safe AI systems). So as far as I can tell, the disagreement is really that you think current work that tries to provide a specific recipe for building safe AI systems is net negative, and I think it is net positive.
This characterization is a little confusing to me: all of these approaches (ARC / Guaranteed Safe AI / Debate) involve identifying problems, and, if possible, solving/mitigating them. To the extent that the problems can be solved, then the approach contributes to [building safe AI systems]; to the extent that they cannot be solved, the approach contributes to [clarifying that we don’t have a solution].
The reason I prefer GSA / ARC is that I expect these approaches to notice more fundamental problems. I then largely expect them to contribute to [clarifying that we don’t have a solution], since solving the problems probably won’t be practical, and they’ll realize that the AI systems they could build won’t be safe-with-high-probability.
I expect scalable oversight (alone) to notice a smaller set of less fundamental problems—which I expect to be easier to fix/mitigate. I expect the impact to be [building plausibly-safe AI systems that aren’t safe (in any robustly scalable sense)]. Of course I may be wrong, if the fundamental problems I believe exist are mirages (very surprising to me) - or if the indirect [help with alignment research] approach turns out to be more effective than I expect (fairly surprising, but I certainly don’t want to dismiss this).
I do still agree that there’s a significant difference in that GSA/ARC are taking a worst-case-assumptions approach—so in principle they could be too conservative. In practice, I think the worst-case-assumption approach is the principled way to do things given our lack of understanding.
I think [the existence of this uncertainty is] an important issue to notice when considering research directions
Why? It doesn’t seem especially action guiding, if we’ve agreed that it’s not high value to try to resolve the uncertainty (which is what I take away from your (1)).
(it’s not obvious to me that it’s not high value to try to resolve this uncertainty, but it’s plausible based on your prior comments on this issue, and I’m stipulating that here)
Here I meant that it’s important to notice that:
The uncertainty exists and is important. (even though resolving it may be impractical)
The fact that you’ve been paying little attention to it does not imply that either:
Your current estimate is accurate.
If your estimate changed, that wouldn’t be highly significant.
I’m saying that, unless people think carefully and deliberately about this, the mind will tend to conclude [my current estimate is pretty accurate] and/or [this variable changing wouldn’t be highly significant] - both of which may be false. To the extent that these are believed, they’re likely to impact other beliefs that are decision-relevant. (e.g. believing that my estimate of x is accurate tells me something about the character of x and my understanding of it)
Not going to respond to everything, sorry, but a few notes:
It fits the pattern of [lower perceived risk] --> [actions that increase risk].
My claim is that for the things you call “actions that increase risk” that I call “opportunity cost”, this causal arrow is very weak, and so you shouldn’t think of it as risk compensation.
E.g. presumably if you believe in this causal arrow you should also believe [higher perceived risk] --> [actions that decrease risk]. But if all building-safe-AI work were to stop today, I think this would have very little effect on how fast the world pushes forward with capabilities.
However, I think people are too ready to fall back on the best reference classes they can find—even when they’re terrible.
I agree that reference classes are often terrible and a poor guide to the future, but often first-principles reasoning is worse (related: 1, 2).
I also don’t really understand the argument in your spoiler box. You’ve listed a bunch of claims about AI, but haven’t spelled out why they should make us expect large risk compensation effects, which I thought was the relevant question.
Quantify “it isn’t especially realistic”—are we talking [15% chance with great effort], or [1% chance with great effort]?
It depends hugely on the specific stronger safety measure you talk about. E.g. I’d be at < 5% on a complete ban on frontier AI R&D (which includes academic research on the topic). Probably I should be < 1%, but I’m hesitant around such small probabilities on any social claim.
For things like GSA and ARC’s work, there isn’t a sufficiently precise claim for me to put a probability on.
Is [because we have a bunch of work on weak measures] not a big factor in your view? Or is [isn’t especially realistic] overdetermined, with [less work on weak measures] only helping conditional on removal of other obstacles?
Not a big factor. (I guess it matters that instruction tuning and RLHF exist, but something like that was always going to happen, the question was when.)
This characterization is a little confusing to me: all of these approaches (ARC / Guaranteed Safe AI / Debate) involve identifying problems, and, if possible, solving/mitigating them. To the extent that the problems can be solved, then the approach contributes to [building safe AI systems];
Hmm, then I don’t understand why you like GSA more than debate, given that debate can fit in the GSA framework (it would be a level 2 specification by the definitions in the paper). You might think that GSA will uncover problems in debate if they exist when using it as a specification, but if anything that seems to me less likely to happen with GSA, since in a GSA approach the specification is treated as infallible.
No worries at all—I was aiming for [Rohin better understands where I’m coming from]. My response was over-long.
E.g. presumably if you believe in this causal arrow you should also believe [higher perceived risk] --> [actions that decrease risk]. But if all building-safe-AI work were to stop today, I think this would have very little effect on how fast the world pushes forward with capabilities.
Agreed, but I think this is too coarse-grained a view. I expect that, absent impressive levels of international coordination, we’re screwed. I’m not expecting [higher perceived risk] --> [actions that decrease risk] to operate successfully on the “move fast and break things” crowd.
I’m considering:
What kinds of people are making/influencing key decisions in worlds where we’re likely to survive?
How do we get those people this influence? (or influential people to acquire these qualities)
What kinds of situation / process increase the probability that these people make risk-reducing decisions?
I think some kind of analysis along these lines makes sense—though clearly it’s hard to know where to draw the line between [it’s unrealistic to expect decision-makers/influencers this principled] and [it’s unrealistic to think things may go well with decision-makers this poor].
I don’t think conditioning on the status-quo free-for-all makes sense, since I don’t think that’s a world where our actions have much influence on our odds of success.
I agree that reference classes are often terrible and a poor guide to the future, but often first-principles reasoning is worse (related: 1, 2).
Agreed (I think your links make good points). However, I’d point out that it can be true both that:
Most first-principles reasoning about x is terrible.
First-principles reasoning is required in order to make any useful prediction of x. (for most x, I don’t think this holds)
You’ve listed a bunch of claims about AI, but haven’t spelled out why they should make us expect large risk compensation effects
I think almost everything comes down to [perceived level of risk] sometimes dropping hugely more than [actual risk] in the case of AI. So it’s about the magnitude of the input.
We understand AI much less well.
We’ll underestimate a bunch of risks, due to lack of understanding.
We may also over-estimate a bunch, but the errors don’t cancel: being over-cautious around fire doesn’t stop us from drowning.
Certain types of research will address [some risks we understand], but fail to address [some risks we don’t see / underestimate].
They’ll then have a much larger impact on [our perception of risk] than on [actual risk].
Drop in perceived risk is much larger than the drop in actual risk.
In most other situations, this isn’t the case, since we have better understanding and/or adaptive feedback loops to correct risk estimates.
It depends hugely on the specific stronger safety measure you talk about. E.g. I’d be at < 5% on a complete ban on frontier AI R&D (which includes academic research on the topic). Probably I should be < 1%, but I’m hesitant around such small probabilities on any social claim.
That’s useful, thanks. (these numbers don’t seem foolish to me—I think we disagree mainly on [how necessary are the stronger measures] rather than [how likely are they])
Hmm, then I don’t understand why you like GSA more than debate, given that debate can fit in the GSA framework (it would be a level 2 specification by the definitions in the paper).
Oh sorry, I should have been more specific—I’m only keen on specifications that plausibly give real guarantees: level 6(?) or 7. I’m only keen on the framework conditional on meeting an extremely high bar for the specification. If that part gets ignored on the basis that it’s hard (which it obviously is), then it’s not clear to me that the framework is worth much.
I suppose I’m also influenced by the way some of the researchers talk about it—I’m not clear how much focus Davidad is currently putting on level 6/7 specifications, but he seems clear that they’ll be necessary.
I expect that, absent impressive levels of international coordination, we’re screwed.
This is the sort of thing that makes it hard for me to distinguish your argument from “[regardless of the technical work you do] there will always be some existentially risky failures left, so if we proceed we will get doom. Therefore, we should avoid solving some failures, because those failures could help build political will to shut it all down”.
I agree that, conditional on believing that we’re screwed absent huge levels of coordination regardless of technical work, then a lot of technical work including debate looks net negative by reducing the will to coordinate.
What kinds of people are making/influencing key decisions in worlds where we’re likely to survive?
[...]
I don’t think conditioning on the status-quo free-for-all makes sense, since I don’t think that’s a world where our actions have much influence on our odds of success.
Similarly this only makes sense under a view where technical work can’t have much impact on p(doom) by itself, aka “regardless of technical work we’re screwed”. Otherwise even in a “free-for-all” world, our actions do influence odds of success, because you can do technical work that people use, and that reduces p(doom).
I’m only keen on specifications that plausibly give real guarantees: level 6(?) or 7. I’m only keen on the framework conditional on meeting an extremely high bar for the specification.
Oh, my probability on level 6 or level 7 specifications becoming the default in AI is dominated by my probability that I’m somehow misunderstanding what they’re supposed to be. (A level 7 spec for AGI seems impossible even in theory, e.g. because it requires solving the halting problem.)
If we ignore the misunderstanding part then I’m at << 1% probability on “we build transformative AI using GSA with level 6 / level 7 specifications in the nearish future”.
(I could imagine a pause on frontier AI R&D, except that you are allowed to proceed if you have level 6 / level 7 specifications; and those specifications are used in a few narrow domains. My probability on that is similar to my probability on a pause.)
“[regardless of the technical work you do] there will always be some existentially risky failures left, so if we proceed we get doom...
I’m claiming something more like “[given a realistic degree of technical work on current agendas in the time we have], there will be some existentially risky failures left, so if we proceed we’re highly likely to get doom”. I’ll clarify more below.
Otherwise even in a “free-for-all” world, our actions do influence odds of success, because you can do technical work that people use, and that reduces p(doom).
Sure, but I mostly don’t buy p(doom) reduction here, other than through [highlight near-misses] - so that an approach that hides symptoms of fundamental problems is probably net negative. In the free-for-all world, I think doom is overdetermined, absent miracles [1]- and [significantly improved debate setup] does not strike me as a likely miracle, even after I condition on [a miracle occurred].
Factors that push in the other direction:
I can imagine techniques that reduce near-term widespread low-stakes failures.
This may be instrumentally positive if e.g. AI is much better for collective sensemaking than otherwise it would be (even if that’s only [the negative impact isn’t as severe]).
Similarly, I can imagine such techniques mitigating the near-term impact of [we get what we measure] failures. This too seems instrumentally useful.
I do accept that technical work I’m not too keen on may avoid some early foolish/embarrassing ways to fail catastrophically.
I mostly don’t think this helps significantly, since we’ll consistently hit doom later without a change in strategy.
Nonetheless, [don’t be dead yet] is instrumentally useful if we want more time to change strategy, so avoiding early catastrophe is a plus.
[probably other things along similar lines that I’m missing]
But I suppose that on the [usefulness of debate (/scalable oversight techniques generally) research], I’m mainly thinking: [more clearly understanding how and when this may fail catastrophically, and how we’d robustly predict this] seems positive, whereas [show that versions of this technique get higher scores on some benchmarks] probably doesn’t.
Even if I’m wrong about the latter, the former seems more important. Granted, it also seems harder—but I think that having a bunch of researchers focus on it and fail to come up with any principled case is useful too (at least for them).
If we ignore the misunderstanding part then I’m at << 1% probability on “we build transformative AI using GSA with level 6 / level 7 specifications in the nearish future”.
(I could imagine a pause on frontier AI R&D, except that you are allowed to proceed if you have level 6 / level 7 specifications; and those specifications are used in a few narrow domains. My probability on that is similar to my probability on a pause.)
Agreed. This is why my main hope on this routes through [work on level 6/7 specifications clarifies the depth and severity of the problem] and [more-formally-specified 6/7 specifications give us something to point to in regulation]. (on the level 7, I’m assuming “in all contexts” must be an overstatement; in particular, we only need something like “...in all contexts plausibly reachable from the current state, given that all powerful AIs developed by us or our AIs follow this specification or this-specification-endorsed specifications”)
Clarifications I’d make on my [doom seems likely, but not inevitable; some technical work seems net negative] position:
If I expected that we had 25 years to get things right, I think I’d be pretty keen on most hands-on technical approaches (debate included).
Quite a bit depends on the type of technical work. I like the kind of work that plausibly has the property [if we iterate on this we’ll probably notice all catastrophic problems before triggering them].
I do think there’s a low-but-non-zero chance of breakthroughs in pretty general technical work. I can’t rule out that ARC theory come up with something transformational in the next few years (or that it comes from some group that’s outside my current awareness).
I’m not ruling out an [AI assistants help us make meaningful alignment progress] path—I currently think it’s unlikely, not impossible.
However, here I note that there’s a big difference between:
The odds that [solve alignment with AI assistants] would work if optimally managed.
The odds that it works in practice.
I worry that researchers doing technical research tend to have the former in mind (implicitly, subconsciously) - i.e. the (implicit) argument is something like “Our work stands a good chance to unlock a winning strategy here”.
But this is not the question—the question is how likely it is to work in practice.
(even conditioning on not-obviously-reckless people being in charge)
It’s guesswork, but on [does a low-risk winning strategy of this form exist (without a huge slowdown)?] I’m perhaps 25%. On [will we actually find and implement such a strategy, even assuming the most reckless people aren’t a factor], I become quite a bit more pessimistic—if I start to say “10%”, I recoil at the implied [40% shot at finding and following a good enough path if one exists]. (the implied arithmetic is sketched after this list)
Of course a lot here depends on whether we can do well enough to fail safely. Even a 5% shot is obviously great if the other 95% is [we realize it’s not working, and pivot].
However, since I don’t see debate-like approaches as plausible in any direct-path-to-alignment sense, I’d like to see a much clearer plan for using such methods as stepping-stones to (stepping stones to...) a solution.
In particular, I’m interested in the case for [if this doesn’t work, we have principled reasons to believe it’ll fail safely] (as an overall process, that is—not on each individual experiment).
When I look at e.g. Buck/Ryan’s outlined iteration process here,[2] I’m not comforted on this point: this has the same structure as [run SGD on passing our evals], only it’s [run researcher iteration on passing our evals]. This is less bad, but still entirely loses the [evals are an independent check on an approach we have principled reasons to think will work] property.
On some level this kind of loop is unavoidable—but having the “core workflow” of alignment researchers be [tweak the approach, then test it against evals] seems a bit nuts.
Most of the hope here seems to come from [the problem is surprisingly (to me) easy] or [catastrophic failure modes are surprisingly (to me) sparse].
I note that it’s not clear they’re endorsing this iteration process—it may just be that they expect it to be the process, so that it’s important for people to be thinking in these terms.
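A minimal sketch of the arithmetic implied by the 25% / 10% estimates above, assuming they are meant as a simple conditional decomposition (this framing is mine, not necessarily the author’s):

$$P(\text{success}) = P(\text{low-risk winning strategy exists}) \cdot P(\text{we find and follow it} \mid \text{it exists}) = 0.25 \times 0.4 = 0.10$$

That is, holding 25% on existence while quoting 10% overall commits one to roughly a 40% conditional chance of finding and following such a path, which is the figure being recoiled at.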
Okay, I think it’s pretty clear that the crux between us is basically what I was gesturing at in my first comment, even if there are minor caveats that make it not exactly literally that.
I’m probably not going to engage with perspectives that say all current [alignment work towards building safer future powerful AI systems] is net negative, sorry. In my experience those discussions typically don’t go anywhere useful.
That’s fair. I agree that we’re not likely to resolve much by continuing this discussion. (but thanks for engaging—I do think I understand your position somewhat better now)
What does seem worth considering is adjusting research direction to increase focus on [search for and better understand the most important failure modes] - both of debate-like approaches generally, and any [plan to use such techniques to get useful alignment work done].
I expect that this would lead people to develop clearer, richer models. Presumably this will take months rather than hours, but it seems worth it (whether or not I’m correct—I expect that [the understanding required to clearly demonstrate to me that I’m wrong] would be useful in a bunch of other ways).
Thanks for the response. I realize this kind of conversation can be annoying (but I think it’s important).
[I’ve included various links below, but they’re largely intended for readers-that-aren’t-you]
(Thanks for this too. I don’t endorse that description, but it’s genuinely useful to know what impression I’m creating—particularly when it’s not what I intended)
I’d rephase my position as:
There’ll always be some risk of existential failure. We want to reduce the total risk. Each step we take forward is a tradeoff: we accept some risk on that step, and (hopefully) reduce future risk by a greater amount.
Risk might be reduced through:
Increased understanding.
Clear evidence that our current understanding isn’t sufficient. (helpful for political will, coordination...)
[various other things]
I am saying “we might get doom”.
I think the odds are high primarily because I don’t expect we’ll get a [mostly safe setup that greatly reduces our odds of [trying something plausible that causes catastrophe]]. (I may be wrong here—hopefully so).
I am not saying “we should not do safety work”; I’m saying “risk compensation needs to be a large factor in deciding which safety work to do”, and “I think few researchers take risk compensation sufficiently seriously”.
To a first approximation, I’d say the odds of [safety work on x turns out negative due to risk compensation] is dependent on how well decision-makers (or ‘experts’ they’re listening to) are tracking what we don’t understand. Specifically, what we might expect to be different from previous experience and/or other domains, and how significant these differences are to outcomes.
Risk is highest when we believe that we understand more than we do understand.
There’s also a unilateralist’s curse issue here: it matters if there are any dangerously overconfident actors in a position to take action that’ll increase risk. (noting that [slightly overconfident] may be [dangerously overconfident] in this context)
This matters in considering the downstream impact of research. I’d be quite a bit less worried about debate research if I only had to consider [what will researchers at least as cautious as Rohin do with this?]. (though still somewhat worried :))
See also Critch’s thoughts on the need for social models when estimating impact.
We tend to have a distorted picture of our own understanding, since there’s a strong correlation between [we understand x well] and [we’re able to notice x and express it clearly].
There’s a kind of bias/variance tradeoff here: if we set our evidential standards such that we don’t focus on vague/speculative/indirect/conceptual arguments, we’ll reduce variance, but risk significant sampling bias.
Similarly, conclusions downstream of availability-heuristic-triggered thoughts will tend to be disproportionately influenced by the parts of the problem we understand (at least well enough to formulate clear questions).
I expect that some researchers actively compensate for this in their conscious deliberation, but that it’s very hard to get our intuitions to compensate appropriately.
[[EDIT: Oh, and when considering downstream impact, risk compensation etc, I think it’s hugely important that most decision-makers and decision-making systems have adapted to a world where [those who can build x understand x] holds. A natural corollary here is [the list of known problems that concern [operation of the system itself] is complete].
This implicit assumption underlies risk management approaches, governance structures and individuals’ decision-making processes.
That it’s implicit makes it much harder to address, since there aren’t usually “and here we assume that those who can build x understand x” signposts.
It might be viable to re-imagine risk-management such that this is handled.
It’s much less likely that we get to re-imagine governance structures, or the cognition of individual decision-makers.]]
Again, I don’t think it’s implausible that debate-like approaches come out better than various alternatives after carefully considering risk compensation. I do think it’s a serious error not to spend quite a bit of time and effort understanding and reducing uncertainty on this.
Possibly many researchers do this, but don’t have any clean, legible way to express their process/conclusions. I don’t get that impression: my impression is that arguments along the lines I’m making tend to be perceived as fully general counter-arguments and dismissed (whether they come from outside, or from the researchers themselves).
I’m not sure how direct you intend “this is a better way...” (vs e.g. “this will build the foundation of better ways...”). I guess I’d want to reframe it as “This is a better process by which to build future powerful AI systems”, so as to avoid baking in a level of concreteness before looking at the problem.
That said, the following seem good to me:
Davidad’s stuff—e.g. Towards Guaranteed Safe AI.
I’m not without worries here, and I think “Guaranteed safe”, “Provably safe” are silly, unhelpful overstatements—but the broad direction seems useful.
That said, the part I’m most keen on would be the “safety specification”, and what I’d expect here is something like [more work on this clarifies that it’s a very hard problem we’re not close to solving]. (noting that [this doesn’t directly cause catastrophe] is an empty ‘guarantee’)
ARC theory’s stuff.
I’m not sure you’d want to include this??
I’m keen on it because:
It seems likely to expand understanding—to uncover new problems.
It may plausibly be sufficiently general—closer to the [fundamental fix] than the [patch this noticeably undesirable behaviour] end of the spectrum.
Evan’s red-teaming stuff (e.g. model organisms of misalignment).
This seems net positive, and I think [finding a thousand ways not to build an AI] is part of a good process.
I do still worry that there’s a path of [find concrete problem] → [make concrete problem a target for research] → [fix concrete problems we’ve found, without fixing the underlying issues].
The ideal situation from [mitigate risk compensation] perspective is:
Have as much understanding and clarity of fundamental problems as possible.
Have enough clear, concrete examples to make the need for caution clear to decision-makers.
To the extent that Evan et al achieve (1), I’m unreservedly in favour.
On (2) the situation is a bit less clear: we’re in more danger when there are serious problems for which we’re unable to make a clear/concrete case.
And, again, my conclusion is not “never expose new concrete problems”, but rather “consider the downsides of doing so when picking a particular line of research”.
A bunch of foundational stuff you probably also wouldn’t want to include under this heading (singular learning theory, natural abstractions, various agent-foundations-shaped things, perhaps computational mechanics (??))).
Not a problem. However, I’d want to highlight the distinction between:
Substantial efforts to resolve this uncertainty haven’t worked out.
Resolving this uncertainty isn’t important.
Both are reasonable explanations for not putting much further effort into such discussions.
However, I worry that the system-1s of researchers don’t distinguish between (1) and (2) here—so that the impact of concluding (1) will often be to act as if the uncertainty isn’t important (and as if the implicit corollaries hold).
I don’t see an easy way to fix this—but I think it’s an important issue to notice when considering research directions. Not only [consider this uncertainty when picking a research direction], but also [consider that your system 1 is probably underestimating the importance of this uncertainty (and corollaries), since you haven’t been paying much attention to it (for understandable reasons)].
Main points:
I’m on board with these.
I still don’t see why you believe this. Do you agree that in many other safety fields, safety work mostly didn’t think about risk compensation, and still drove down absolute risk? (E.g. I haven’t looked into it but I bet people didn’t spend a bunch of time thinking about risk compensation when deciding whether to include seat belts in cars.)
If you do agree with that, what makes AI different from those cases? (The arguments you give seem like very general considerations that apply to other fields as well.)
I’d say that the risk compensation argument as given here Proves Too Much and implies that most safety work in most previous fields was net negative, which seems clearly wrong to me. It’s true that as a result I don’t spend lots of time thinking about risk compensation; that still seems correct to me.
It seems like your argument here, and in other parts of your comment, is something like “we could do this more costly thing that increases safety even more”. This seems like a pretty different argument; it’s not about risk compensation (i.e. when you introduce safety measures, people do more risky things), but rather about opportunity cost (i.e. when you introduce weak safety measures, you reduce the will to have stronger safety measures). This is fine, but I want to note the explicit change in argument; my earlier comment and the discussion above was not trying to address this argument.
Briefly on opportunity cost arguments, the key factors are (a) how much will is there to pay large costs for safety, (b) how much time remains to do the necessary research and implement it, and (c) how feasible is the stronger safety measure. I am actually more optimistic about both (a) and (b) than what I perceive to be the common consensus amongst safety researchers at AGI labs, but tend to be pretty pessimistic about (c) (at least relative to many LessWrongers, I’m not sure how it compares to safety researchers at AGI labs).
Anyway for now let’s just say that I’ve thought about these three factors and think it isn’t especially realistic to expect that we can get stronger safety measures, and as a result I don’t see opportunity cost as a big reason not to do the safety work we currently do.
Yeah, I’m not willing to do this. This seems like an instance of the opportunity cost argument, where you try to move to a paradigm that can enable stronger safety measures. See above for my response.
Similarly, the theory of change you cite for your examples seems to be “discovers or clarifies problems that shows that we don’t have a solution” (including for Guaranteed Safe AI and ARC theory, even though in principle those could be about building safe AI systems). So as far as I can tell, the disagreement is really that you think current work that tries to provide a specific recipe for building safe AI systems is net negative, and I think it is net positive.
Other smaller points:
I certainly agree that it is possible for risk compensation to make safety work net negative, which is all I think you can conclude from that post (indeed the post goes out of its way to say it isn’t arguing for or against any particular work). I disagree that this effect is large enough to meaningfully change the decisions on what work we should do, given the specific work that we typically do (including debate).
This is a weird hypothetical. The entire field of AI existential safety is focused on speculative, conceptual arguments. (I’m not quite sure what the standard is for “vague” and “indirect” but probably I’d include those adjectives too.)
Why? It doesn’t seem especially action guiding, if we’ve agreed that it’s not high value to try to resolve the uncertainty (which is what I take away from your (1)).
Maybe you’re saying “for various reasons (e.g. unilateralist curse, wisdom of the crowds, coordinating with people with similar goals), you should treat the probability that your work is net negative as higher than you would independently assess, which can affect your prioritization”. I agree that there’s some effect here but my assessment is that it’s pretty small and doesn’t end up changing decisions very much.
[apologies for writing so much; you might want to skip/skim the spoilered bit, since it seems largely a statement of the obvious]
Agreed (I imagine there are exceptions, but I’d be shocked if this weren’t usually true).
[I’m responding to this part next, since I think it may resolve some of our mutual misunderstanding]
Perhaps we’ve been somewhat talking at cross purposes then, since I certainly consider [not acting to bring about stronger safety measures] to be within the category of [doing more risky things].
It fits the pattern of [lower perceived risk] --> [actions that increase risk].
For clarity, risky actions I’m thinking about would include:
Not pushing for stronger safety measures.
Not pushing for international coordination.
Pushing forward with capabilities more quickly.
Placing too much trust in systems off distribution.
Placing too much trust in the output of [researchers using systems to help solve alignment].
Not spending enough on more principled (by my lights) alignment research.
Some of these might be both [opportunity cost] and [risk compensation].
I.e.:
We did x.
x reduced perceived risk.
A precondition of y was greater-than-current perceived risk.
Now we can’t do y.
[Not doing y] is risky.
Mostly for other readers, I’m going to spoiler my answer to this: my primary claim is that more people need to think about and answer these questions themselves, so I’d suggest that readers take a little time to do this. I’m significantly more confident that the question is important, than that my answer is correct or near-complete.
I agree that the considerations I’m pointing to are general.
I think the conclusions differ, since AI is different.
First a few clarifications:
I’m largely thinking of x-risk, not moderate disasters.
On this basis, I’m thinking about loss-of-control, rather than misuse.
I think if we’re talking about e.g. [misuse leads to moderate disaster], then other fields and prior AI experience become much more reasonable reference classes—and I’d expect things like debate to be net positive here.
My current guess/estimate that debate research is net negative is largely based on x-risk via loss-of-control. (again, I may be wrong—but I’d like researchers to do serious work on answering this question)
How I view AI risk being different:
First another clarification: the risk-increasing actions I’m worried about are not [individual user uses AI incautiously] but things like [AI lab develops/deploys models incautiously], and [governments develop policy incautiously] - so we shouldn’t be thinking of e.g. [people driving faster with seatbelts], but e.g. [car designers / regulators acting differently once seatbelts are a thing].
To be clear, I don’t claim that seatbelts were negative on this basis.
Key differences:
We can build AI without understanding it.
People and systems are not used to this.
Plausible subconscious heuristics that break here:
[I can build x] and [almost no-one understands x better than me], [therefore, I understand x pretty well].
[I can build x] and [I’ve worked with x for years], [therefore I have a pretty good understanding of the potential failure modes].
[We can build x], [therefore we should expect to be able to describe any likely failure modes pretty concretely/rigorously].
[I’ve discussed the risks of x with many experts who build x], [therefore I have a decent high-level understanding of potential failures].
[I’m not afraid], [therefore a deadly threat we’re not ready to deal with must be unlikely]. (I agree it’s unlikely now)
...
One failure of AI can be unrecoverable.
In non-AI cases there tend to be feedback loops that allow adjustments both in the design, and in people’s risk estimates.
Of course such loops exist for some AI failure modes.
This makes the unilateralist’s curse aspect more significant: it’s not the average understanding or caution that matters. High variance is a problem.
Knowing how this impacts risk isn’t straightforward, since it depends a lot on [the levels of coordination we expect] and [the levels of coordination that seem necessary].
My expectation is that we’re just screwed if the most reckless organizations are able to continue without constraint.
Therefore my concern is focused on worlds where we achieve sufficient coordination to make clearly reckless orgs not relevant. (and on making such worlds more likely)
Debate-style approaches seem a good fit for [this won’t actually work, but intelligent, well-informed, non-reckless engineers/decision-makers might think that it will]. I’m concerned about those who are somewhat overconfident.
The most concerning AI is very general.
This leads to a large space of potential failure modes. (both in terms of mechanism, and in terms of behaviour/outcome)
It is hard to make a principled case that we’ve covered all serious failure modes (absent a constructive argument based on various worst-case assumptions).
The most concerning AI will be doing some kind of learned optimization. (in a broad, John Wentworthy sense)
Pathogens may have the [we don’t really understand them] property, but an individual pathogen isn’t going to be general, or to be doing significant learned optimization. (if and when this isn’t true, then I’d be similarly wary of ad-hoc pathogen safety work)
New generations of AI can often solve qualitatively different problems from previous generations. With each new generation, we’re off distribution in a much more significant sense than with new generations of e.g. car design.
Unknown unknowns may arise internally due to the complex systems nature of AI.
In cases we’re used to, the unknown unknowns tend to come from outside: radically unexpected things may happen to a car, but probably not directly due to the car.
I’m sure I’m missing various relevant factors.
Overall, I think straightforward reference classes for [safety impact of risk compensation] tell us approximately nothing—too much is different.
I’m all for looking for a variety of reference classes: we need all the evidence we can get.
However, I think people are too ready to fall back on the best reference classes they can find—even when they’re terrible.
This seems to miss the obvious: (d) how much safety do the weak vs strong measures get us?
I expect that you believe work on debate (and similar weak measures) gets us significantly more than I do. (and hopefully you’re correct!)
Given that this seems highly significant, can you:
Quantify “it isn’t especially realistic”—are we talking [15% chance with great effort], or [1% chance with great effort]?
Give a sense of the reasons you expect this.
Is [because we have a bunch of work on weak measures] not a big factor in your view?
Or is [isn’t especially realistic] overdetermined, with [less work on weak measures] only helping conditional on removal of other obstacles?
I’d also note that it’s not only [more progress on weak measures] that matters here, but also the signal sent by pursuing these research directions.
If various people with influence on government see [a bunch of lab safety teams pursuing x safety directions], I expect that most will conclude: “There seems to be a decent consensus within the field that x will get us acceptable safety levels”, rather than “Probably x is inadequate, but these researchers don’t see much hope in getting stronger-than-x measures adopted”.
I assume that you personally must have some constraints on the communication strategy you’re able to pursue. However, it does seem highly important that if the safety teams at labs are pursuing agendas based on [this isn’t great, but it’s probably the best we can get], this is clearly and loudly communicated.
This characterization is a little confusing to me: all of these approaches (ARC / Guaranteed Safe AI / Debate) involve identifying problems, and, if possible, solving/mitigating them.
To the extent that the problems can be solved, then the approach contributes to [building safe AI systems]; to the extent that they cannot be solved, the approach contributes to [clarifying that we don’t have a solution].
The reason I prefer GSA / ARC is that I expect these approaches to notice more fundamental problems. I then largely expect them to contribute to [clarifying that we don’t have a solution], since solving the problems probably won’t be practical, and they’ll realize that the AI systems they could build won’t be safe-with-high-probability.
I expect scalable oversight (alone) to notice a smaller set of less fundamental problems—which I expect to be easier to fix/mitigate. I expect the impact to be [building plausibly-safe AI systems that aren’t safe (in any robustly scalable sense)].
Of course I may be wrong, if the fundamental problems I believe exist are mirages (very surprising to me) - or if the indirect [help with alignment research] approach turns out to be more effective than I expect (fairly surprising, but I certainly don’t want to dismiss this).
I do still agree that there’s a significant difference in that GSA/ARC are taking a worst-case-assumptions approach—so in principle they could be too conservative. In practice, I think the worst-case-assumption approach is the principled way to do things given our lack of understanding.
(it’s not obvious to me that trying to resolve this uncertainty isn’t high value, but your prior comments on this issue make that plausible, so I’m stipulating it here)
Here I meant that it’s important to notice that:
The uncertainty exists and is important. (even though resolving it may be impractical)
The fact that you’ve been paying little attention to it does not imply that either:
Your current estimate is accurate.
If your estimate changed, that wouldn’t be highly significant.
I’m saying that, unless people think carefully and deliberately about this, the mind will tend to conclude [my current estimate is pretty accurate] and/or [this variable changing wouldn’t be highly significant] - both of which may be false.
To the extent that these are believed, they’re likely to impact other beliefs that are decision-relevant. (e.g. believing that my estimate of x is accurate tells me something about the character of x and my understanding of it)
Not going to respond to everything, sorry, but a few notes:
My claim is that for the things you call “actions that increase risk” that I call “opportunity cost”, this causal arrow is very weak, and so you shouldn’t think of it as risk compensation.
E.g. presumably if you believe in this causal arrow you should also believe [higher perceived risk] --> [actions that decrease risk]. But if all building-safe-AI work were to stop today, I think this would have very little effect on how fast the world pushes forward with capabilities.
I agree that reference classes are often terrible and a poor guide to the future, but often first-principles reasoning is worse (related: 1, 2).
I also don’t really understand the argument in your spoiler box. You’ve listed a bunch of claims about AI, but haven’t spelled out why they should make us expect large risk compensation effects, which I thought was the relevant question.
It depends hugely on the specific stronger safety measure you talk about. E.g. I’d be at < 5% on a complete ban on frontier AI R&D (which includes academic research on the topic). Probably I should be < 1%, but I’m hesitant around such small probabilities on any social claim.
For things like GSA and ARC’s work, there isn’t a sufficiently precise claim for me to put a probability on.
Not a big factor. (I guess it matters that instruction tuning and RLHF exist, but something like that was always going to happen, the question was when.)
Hmm, then I don’t understand why you like GSA more than debate, given that debate can fit in the GSA framework (it would be a level 2 specification by the definitions in the paper). You might think that GSA will uncover problems in debate (if they exist) when debate is used as the specification, but if anything that seems to me less likely to happen with GSA, since in a GSA approach the specification is treated as infallible.
No worries at all—I was aiming for [Rohin better understands where I’m coming from]. My response was over-long.
Agreed, but I think this is too coarse-grained a view.
I expect that, absent impressive levels of international coordination, we’re screwed. I’m not expecting [higher perceived risk] --> [actions that decrease risk] to operate successfully on the “move fast and break things” crowd.
I’m considering:
What kinds of people are making/influencing key decisions in worlds where we’re likely to survive?
How do we get those people this influence? (or influential people to acquire these qualities)
What kinds of situation / process increase the probability that these people make risk-reducing decisions?
I think some kind of analysis along these lines makes sense—though clearly it’s hard to know where to draw the line between [it’s unrealistic to expect decision-makers/influencers this principled] and [it’s unrealistic to think things may go well with decision-makers this poor].
I don’t think conditioning on the status-quo free-for-all makes sense, since I don’t think that’s a world where our actions have much influence on our odds of success.
Agreed (I think your links make good points). However, I’d point out that it can be true both that:
Most first-principles reasoning about x is terrible.
First-principles reasoning is required in order to make any useful prediction about x. (for most x, I don’t think this second point holds)
I think almost everything comes down to [perceived level of risk] sometimes dropping hugely more than [actual risk] in the case of AI. So it’s about the magnitude of the input.
We understand AI much less well.
We’ll underestimate a bunch of risks, due to lack of understanding.
We may also overestimate a bunch, but the errors don’t cancel: being over-cautious around fire doesn’t stop us from drowning.
Certain types of research will address [some risks we understand], but fail to address [some risks we don’t see / underestimate].
They’ll then have a much larger impact on [our perception of risk] than on [actual risk].
The drop in perceived risk is therefore much larger than the drop in actual risk.
In most other situations, this isn’t the case, since we have better understanding and/or adaptive feedback loops to correct risk estimates.
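A purely illustrative calculation of the shape of this claim (the numbers here are invented for the example, not estimates): suppose actual risk decomposes into a component we see and a component we don’t, with perceived risk tracking only the seen component.
$$\text{actual risk: } 0.1_{\text{seen}} + 0.3_{\text{unseen}} = 0.4 \;\to\; 0.3 \;\;(\text{a }25\%\text{ drop}) \qquad \text{perceived risk: } 0.1 \;\to\; \approx 0 \;\;(\text{a near-total drop})$$
Research that fixes only the seen component produces a modest drop in actual risk but a collapse in perceived risk; if that collapse then licenses riskier decisions, the net effect on actual risk can be negative.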
That’s useful, thanks. (these numbers don’t seem foolish to me—I think we disagree mainly on [how necessary are the stronger measures] rather than [how likely are they])
Oh sorry, I should have been more specific—I’m only keen on specifications that plausibly give real guarantees: level 6(?) or 7. I’m only keen on the framework conditional on meeting an extremely high bar for the specification.
If that part gets ignored on the basis that it’s hard (which it obviously is), then it’s not clear to me that the framework is worth much.
I suppose I’m also influenced by the way some of the researchers talk about it—I’m not clear how much focus Davidad is currently putting on level 6/7 specifications, but he seems clear that they’ll be necessary.
This is the sort of thing that makes it hard for me to distinguish your argument from “[regardless of the technical work you do] there will always be some existentially risky failures left, so if we proceed we will get doom. Therefore, we should avoid solving some failures, because those failures could help build political will to shut it all down”.
I agree that, conditional on believing that we’re screwed absent huge levels of coordination regardless of technical work, then a lot of technical work including debate looks net negative by reducing the will to coordinate.
Similarly this only makes sense under a view where technical work can’t have much impact on p(doom) by itself, aka “regardless of technical work we’re screwed”. Otherwise even in a “free-for-all” world, our actions do influence odds of success, because you can do technical work that people use, and that reduces p(doom).
Oh, my probability on level 6 or level 7 specifications becoming the default in AI is dominated by my probability that I’m somehow misunderstanding what they’re supposed to be. (A level 7 spec for AGI seems impossible even in theory, e.g. because it requires solving the halting problem.)
If we ignore the misunderstanding part then I’m at << 1% probability on “we build transformative AI using GSA with level 6 / level 7 specifications in the nearish future”.
(I could imagine a pause on frontier AI R&D, except that you are allowed to proceed if you have level 6 / level 7 specifications; and those specifications are used in a few narrow domains. My probability on that is similar to my probability on a pause.)
I’m claiming something more like "[given a realistic degree of technical work on current agendas in the time we have], there will be some existentially risky failures left, so if we proceed we’re highly likely to get doom".
I’ll clarify more below.
Sure, but I mostly don’t buy p(doom) reduction here, other than through [highlighting near-misses], so an approach that hides symptoms of fundamental problems is probably net negative.
In the free-for-all world, I think doom is overdetermined, absent miracles[1], and [significantly improved debate setup] does not strike me as a likely miracle, even after I condition on [a miracle occurred].
Factors that push in the other direction:
I can imagine techniques that reduce near-term widespread low-stakes failures.
This may be instrumentally positive if e.g. AI is much better for collective sensemaking than it otherwise would be (even if that’s only [the negative impact isn’t as severe]).
Similarly, I can imagine such techniques mitigating the near-term impact of [we get what we measure] failures. This too seems instrumentally useful.
I do accept that technical work I’m not too keen on may avoid some early foolish/embarrassing ways to fail catastrophically.
I mostly don’t think this helps significantly, since we’ll consistently hit doom later without a change in strategy.
Nonetheless, [don’t be dead yet] is instrumentally useful if we want more time to change strategy, so avoiding early catastrophe is a plus.
[probably other things along similar lines that I’m missing]
But I suppose that on the [usefulness of debate (/scalable oversight techniques generally) research], I’m mainly thinking: [more clearly understanding how and when this may fail catastrophically, and how we’d robustly predict this] seems positive, whereas [show that versions of this technique get higher scores on some benchmarks] probably doesn’t.
Even if I’m wrong about the latter, the former seems more important.
Granted, it also seems harder—but I think that having a bunch of researchers focus on it and fail to come up with any principled case is useful too (at least for them).
Agreed. This is why my main hope on this routes through [work on level 6/7 specifications clarifies the depth and severity of the problem] and [more-formally-specified 6/7 specifications give us something to point to in regulation].
(on level 7, I’m assuming "in all contexts" must be an overstatement; in particular, we only need something like "...in all contexts plausibly reachable from the current state, given that all powerful AIs developed by us or our AIs follow this specification or this-specification-endorsed specifications")
Clarifications I’d make on my [doom seems likely, but not inevitable; some technical work seems net negative] position:
If I expected that we had 25 years to get things right, I think I’d be pretty keen on most hands-on technical approaches (debate included).
Quite a bit depends on the type of technical work. I like the kind of work that plausibly has the property [if we iterate on this we’ll probably notice all catastrophic problems before triggering them].
I do think there’s a low-but-non-zero chance of breakthroughs in pretty general technical work. I can’t rule out that ARC theory comes up with something transformational in the next few years (or that it comes from some group that’s outside my current awareness).
I’m not ruling out an [AI assistants help us make meaningful alignment progress] path—I currently think it’s unlikely, not impossible.
However, here I note that there’s a big difference between:
The odds that [solve alignment with AI assistants] would work if optimally managed.
The odds that it works in practice.
I worry that researchers doing technical research tend to have the former in mind (implicitly, subconsciously) - i.e. the (implicit) argument is something like "Our work stands a good chance of unlocking a winning strategy here".
But this is not the question—the question is how likely it is to work in practice.
(even conditioning on not-obviously-reckless people being in charge)
It’s guesswork, but on [does a low-risk winning strategy of this form exist (without a huge slowdown)?] I’m perhaps 25%. On [will we actually find and implement such a strategy, even assuming the most reckless people aren’t a factor], I become quite a bit more pessimistic—if I start to say “10%”, I recoil at the implied [40% shot at finding and following a good enough path if one exists].
Of course a lot here depends on whether we can do well enough to fail safely. Even a 5% shot is obviously great if the other 95% is [we realize it’s not working, and pivot].
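To make the arithmetic implied by those guesses explicit (this just restates the figures above; it isn’t a new estimate):
$$P(\text{low-risk win}) = P(\text{such a strategy exists}) \times P(\text{we find and follow it} \mid \text{it exists}) \approx 0.25 \times 0.40 = 0.10$$
So quoting “10%” overall would commit me to roughly a 40% conditional chance of finding and following a good-enough path, which is the figure I recoil at.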
However, since I don’t see debate-like approaches as plausible in any direct-path-to-alignment sense, I’d like to see a much clearer plan for using such methods as stepping-stones to (stepping stones to...) a solution.
In particular, I’m interested in the case for [if this doesn’t work, we have principled reasons to believe it’ll fail safely] (as an overall process, that is—not on each individual experiment).
When I look at e.g. Buck/Ryan’s outlined iteration process here,[2] I’m not comforted on this point: this has the same structure as [run SGD on passing our evals], only it’s [run researcher iteration on passing our evals] (a minimal sketch of the parallel is included below). This is less bad, but still entirely loses the [evals are an independent check on an approach we have principled reasons to think will work] property.
On some level this kind of loop is unavoidable—but having the “core workflow” of alignment researchers be [tweak the approach, then test it against evals] seems a bit nuts.
Most of the hope here seems to come from [the problem is surprisingly (to me) easy] or [catastrophic failure modes are surprisingly (to me) sparse].
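For readers who find that structural claim abstract, here is a minimal illustrative sketch (all function names are hypothetical placeholders, not anyone’s actual workflow); the point is only that the control flow is identical in the two cases:

```python
# Illustrative only: in both loops the evals are the optimization target,
# so "passes the evals" stops being independent evidence about the approach.

def sgd_on_evals(model, passes_evals, sgd_step):
    # [run SGD on passing our evals]
    while not passes_evals(model):
        model = sgd_step(model)
    return model

def researcher_iteration_on_evals(approach, passes_evals, tweak):
    # [run researcher iteration on passing our evals]
    while not passes_evals(approach):
        approach = tweak(approach)
    return approach
```

The only difference is the update operator (gradient steps vs researcher tweaks); neither loop retains the [evals are an independent check] property.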
In the sense that [someone proves the Riemann hypothesis this year] would be a miracle.
I note that it’s not clear they’re endorsing this iteration process—it may just be that they expect it to be the process, so that it’s important for people to be thinking in these terms.
Okay, I think it’s pretty clear that the crux between us is basically what I was gesturing at in my first comment, even if there are minor caveats that make it not exactly literally that.
That’s fair. I agree that we’re not likely to resolve much by continuing this discussion. (but thanks for engaging—I do think I understand your position somewhat better now)
What does seem worth considering is adjusting research direction to increase focus on [search for and better understand the most important failure modes] - both of debate-like approaches generally, and any [plan to use such techniques to get useful alignment work done].
I expect that this would lead people to develop clearer, richer models.
Presumably this will take months rather than hours, but it seems worth it (whether or not I’m correct—I expect that [the understanding required to clearly demonstrate to me that I’m wrong] would be useful in a bunch of other ways).