Senior research analyst at Open Philanthropy. Doctorate in philosophy from the University of Oxford. Opinions my own.
Joe Carlsmith
I think it’s a fair point that if it turns out that current ML methods are broadly inadequate for automating basically any sophisticated cognitive work (including capabilities research, biology research, etc—though I’m not clear on your take on whether capabilities research counts as “science” in the sense you have in mind), it may be that whatever new paradigm ends up successful messes with various implicit and explicit assumptions in analyses like the one in the essay.
That said, I think if we’re ignorant about what paradigm will succeed re: automating sophisticated cognitive work and we don’t have any story about why alignment research would be harder, it seems like the baseline expectation (modulo scheming) would be that automating alignment is comparably hard (in expectation) to automating these other domains. (I do think, though, that we have reason to expect alignment to be harder even conditional on needing other paradigms, because I think it’s reasonable to expect some of the evaluation challenges I discuss in the post to generalize to other regimes.)
I’m happy to say that easy-to-verify vs. hard-to-verify is what ultimately matters, but I think it’s important to be clear what about makes something easier vs. harder to verify, so that we can be clear about why alignment might or might not be harder than other domains. And imo empirical feedback loops and formal methods are amongst the most important factors there.
If we assume that the AI isn’t scheming to actively withhold empirically/formally verifiable insights from us (I do think this would make life a lot harder), then it seems to me like this is reasonably similar to other domains in which we need to figure out how to elicit as-good-as-human-level suggestions from AIs that we can then evaluate well. E.g., it’s not clear to me why this would be all that different from “suggest a new transformer-like architecture that we can then verify improves training efficiency a lot on some metric.”
Or put another way: at least in the context of non-schemers, the thing I’m looking for isn’t just “here’s a way things could be hard.” I’m specifically looking for ways things will be harder than in the context of capabilities (or, to a lesser extent, in other scientific domains where I expect a lot of economic incentives to figure out how to automate top-human-level work). And in that context, generic pessimism about e.g. heavy RL doesn’t seem like it’s enough.
Sure, maybe there’s a band of capability where you can take over but you can’t do top-human-level alignment research (and where your takeover plan doesn’t involve further capabilities development that requires alignment). It’s not the central case I’m focused on, though.
Also, if there’s an alignment tax (or control tax), then that impacts the comparison, since the AIs doing alignment research are paying that tax whereas the AIs attempting takeover are not.
Is the thought here that the AIs trying to takeover aren’t improving their capabilities in a way that requires paying an alignment tax? E.g. if the tax refers to a comparison between (a) rushing forward on capabilities in a way that screws you on alignment vs. (b) pushing forward on capabilities in a way that preserves alignment, AIs that are fooming will want to do (b) as well (though they may have an easier time of it for other reasons). But if it refers to e.g. “humans will place handicaps on AIs that they need to ensure are aligned, including AIs they’re trying to use for alignment research, whereas rogue AIs that have also freed themselves human control will also be able to get rid of these handicaps,” then yes, that’s an advantage the rogue AIs will have (though note that they’ll still need to self-filtrate etc).
Re: examples of why superintelligences create distinctive challenges: superintelligences seem more likely to be schemers, more likely to be able to systematically and successfully mess with the evidence provided by behavioral tests and transparency tools, harder to exert option-control over, better able to identify and pursue strategies humans hadn’t thought of, harder to supervise using human labor, etc.
If you’re worried about shell games, it’s OK to round off “alignment MVP” to hand-off-ready AI, and to assume that the AIs in question need to be able to pursue make and pursue coherent long-term plans.[1] I don’t think the analysis in the essay alters that much (for example, I think very little rests on the idea that you can get by with myopic AIs), and better to err on the side of conservatism.
I wanted to set aside “hand-off” here because in principle, you don’t actually need to hand-off until humans stop being able to meaningfully contribute to the safety/quality of the automated alignment work, which doesn’t necessarily need to start around the time we have AIs capable of top-human-level alignment work (e.g., human evaluation of the research, or involvement in other aspects of control—e.g., providing certain kinds of expensive, trusted supervision—could persist after that). And when exactly you hand-off depends on a bunch of more detailed, practical trade-offs.
As I said in the post, one way that humans still being involved might not bottleneck the process is if they’re only reviewing the work to figure out whether there’s a problem they need to actively intervene on.
even if human labor is still playing a role in ensuring safety, it doesn’t necessarily need to directly bottleneck the research process – or at least, not if things are going well. For example: in principle, you could allow a fully-automated alignment research process to proceed forward, with humans evaluating the work as it gets produced, but only actively intervening if they identify problems.
And I think you can still likely radically speed up and scale up your alignment research even e.g. you still care about humans reviewing and understanding the work in question.
- ^
Though for what it’s worth I don’t think that the task of assessing “is this a good long-term plan for achieving X” needs itself to involve long-term optimization for X. For example: you could do that task over five minutes in exchange for a piece of candy.
- ^
Fair point, and plausible that I’m too much taking for granted a certain subset of development pathways. That is: I’m focused in the essay on threat models that proceed via the automation of capabilities R&D, but it’s possible that this isn’t necessary.
By “never build superintelligence” I was assuming we were talking about superintelligent AI, so if the humans in question never build superintelligent AI I’d count this path under that bucket. But as I discussed in my first post in the series, you can indeed get access to the benefits of superintelligence without building superintelligent AI in particular.
I suspect the author might already agree with all this (the existence of this logical risk, the social dynamics, the conclusion about norms/laws being needed to reduce AI risk beyond some threshold)...
Yes I think I basically agree. That is, I think it’s very possible that capabilities research is inherently easier to automate than alignment research; I am very worried about the least cautious actors pushing forward prematurely; as I tried to emphasize in the post, I think capability restraint is extremely important (and: important even if we can successfully automate alignment research); and I think that norms/laws are likely to play an important role there.
Note that conceptual research, as I’m understanding it, isn’t defined by the cognitive skills involved in the research—i.e., by whether the researchers need to have “conceptual thoughts” like “wait, is this measuring what I think it’s measuring?”. I agree that normal science involves a ton of conceptual thinking (and many “number-go-up” tasks do too). Rather, conceptual research as I’m understanding it is defined by the tools available for evaluating the research in question.[1] In particular, as I’m understanding it, cases where neither available empirical tests nor formal methods help much.
Thus, in your neurotransmitter example, it does indeed take some kind of “conceptual thinking” to come up with the thought “maybe it actually takes longer for neurotransmitters to get re-absorbed than it takes for them to clear from the cleft.” But if some AI presented us with this claim, the question is whether we could evaluate it via some kind of empirical test, which it sounds like we plausibly could. Of course, we do still need to interpret the results of these tests—e.g., to understand enough about what we’re actually trying to measure to notice that e.g. one measurement is getting at it better than another. But we’ve got rich empirical feedback loops to dialogue with.
So if we interpret “conceptual work” as conceptual thinking, I do agree that “there will be market pressure to make AI good at conceptual work, because that’s a necessary component of normal science.” And this is closely related to the comforts I discuss in section 6.1. That is: a lot of alignment research seems pretty comparable to me to the sort of science at stake in e.g. biology, physics, computer science, etc, where I think human evaluation has a decent track record (or at least, a better track record than philosophy/futurism), and where I expect a decent amount of market pressure to resolve evaluation difficulties adequately. So (modulo scheming AIs differentially messing with us in some domains vs. others), at least by the time we’re successfully automating these other forms of science, I think we should be reasonably optimistic about automating that kind of alignment research as well. But this still leaves the type of alignment research that looks more centrally like philosophy/futurism, where I think evaluation is additionally challenging, and where the human track record looks worse.
- ^
Thus, from section 6.2.3: “Conceptual research, as I’m understanding it, is defined by the methods available for evaluating it, rather than the cognitive skills involved in producing it. For example: Einstein on relativity was clearly a giant conceptual breakthrough. But because it was evaluable via a combination of empirical predictions and formal methods, it wouldn’t count as ‘conceptual research’ in my sense.”
- ^
I agree it’s generally better to frame in terms of object-level failure modes rather than “objections” (though: sometimes one is intentionally responding to objections that other people raise, but that you don’t buy). And I think that there is indeed a mindset difference here. That said: your comment here is about word choice. Are there substantive considerations you think that section is missing, or substantive mistakes you think it’s making?
Thanks, John. I’m going to hold off here on in-depth debate about how to choose between different ontologies in this vicinity, as I do think it’s often a complicated and not-obviously-very-useful thing to debate in the abstract, and that lots of taste is involved. I’ll flag, though, that the previous essay on paths and waystations (where I introduce this ontology in more detail) does explicitly name various of the factors you mention (along with a bunch of other not-included subtleties). E.g., re the importance of multiple actors:
Now: so far I’ve only been talking about one actor. But AI safety, famously, implicates many actors at once – actors that can have different safety ranges and capability frontiers, and that can make different development/deployment decisions. This means that even if one actor is adequately cautious, and adequately good at risk evaluation, another might not be...
And re: e.g. multidimensionality, and the difference between “can deploy safely” and “would in practice” -- from footnote 14:
Complexities I’m leaving out (or not making super salient) include: the multi-dimensionality of both the capability frontier and the safety range; the distinction between safety and elicitation; the distinction between development and deployment; the fact that even once an actor “can” develop a given type of AI capability safely, they can still choose an unsafe mode of development regardless; differing probabilities of risk (as opposed to just a single safety range); differing severities of rogue behavior (as opposed to just a single threshold for loss of control); the potential interactions between the risks created by different actors; the specific standards at stake in being “able” to do something safely; etc.
I played around with more complicated ontologies that included more of these complexities, but ended up deciding against. As ever, there are trade-offs between simplicity and subtlety, I chose a particular way of making those trade-offs, and so far I’m not regretting.
Re: who is risk-evaluating, how they’re getting the information, the specific decision-making processes: yep, the ontology doesn’t say, and I endorse that, I think trying to specify would be too much detail.
Re: why factor apart the capability frontier and the safety range—sure, they’re not independent, but it seems pretty natural to me to think of risk as increasing as frontier capabilities increase, and of our ability to make AIs safe as needing to keep up with that. Not sure I understand your alternative proposals re: “looking at their average and difference as the two degrees of freedom, or their average and difference in log space, or the danger line level and the difference, or...”, though, or how they would improve matters.
As I say, people have different tastes re: ontologies, simplifications, etc. My own taste finds this one fairly natural and useful—and I’m hoping that the use I give it in the rest of series (e.g., in classifying different waystations and strategies, in thinking about these different feedback loops, etc) can illustrate why (see also the slime analogy from the previous post for another intuition pump). But I welcome specific proposals for better overall ways of thinking about the issues in play.
Thanks, John—very open to this kind of push-back (and as I wrote in the fake thinking post, I am definitely not saying that my own thinking is free of fakeness). I do think the post (along with various other bits of the series) is at risk of being too anchored on the existing discourse. That said: do you have specific ways in which you feel like the frame in the post is losing contact with the territory?
That seems like a useful framing to me. I think the main issue is just that often, we don’t think of commitment as literally closing off choice—e.g., it’s still a “choice” to keep a promise. But if you do think of it as literally closing off choice then yes, you can avoid the violation of Guaranteed Payoffs, at least in cases where you’ve actually already made the commitment in question.
There’s also a bit more context in a footnote on the first post:
“Some content in the series is drawn/adapted from content that I’ve posted previously on LessWrong and the EA Forum, though not on my website or substack. My aim with those earlier posts was to get fast, rough versions of my thinking out there on the early side; here I’m aiming to revise, shorten, and reconsider. And some of the content in the series is wholly new.”
See also this flag at the beginning of the series you linked to:
“The content here is rough. I’m hoping, later, to revise it, along with some other work (including some of the future posts just mentioned), into something more polished and cohesive. But for various reasons, I wanted to get it out there on the earlier side.”
I’d call that one aspect—in particular, quite nearby to what I was trying to capture with “map thinking vs. world thinking” and “rote thinking vs. new thinking.” But doesn’t seem like it captures the whole thing directly.
Thanks for this comment, Mart, and for the pointer to Vervaeke—does seem plausibly quite relevant. And “functional agent-arena-relationship” seems like an interesting angle on not-being-a-Zombie.
Cross-posting my take from twitter with a few edits:
My current take on this paper is that they aren’t emphasizing the most interesting results, which are the sandbagging results in section 3.6 and appendix A.6.
In more particular: the paper frames its results centrally as about the *capability* to scheme given prompting towards a specific goal, rather than the *propensity* to scheme given the model’s default motivations. But the sandbagging results show Claude Sonnet-3.5 doing notably worse (16%) on a multiplication task/evaluation when the documentation says that Claude will get “unlearning training” if it performs too well on the eval, vs. if it performs too poorly, *even without any other goal listed in the prompt*, and *even without a chain of thought.*
Taken at face value (it’s just one result, more investigation needed, etc), these look like “propensity” results to me. And they suggest that the reasoning that drives this propensity can take place within opaque forward passes of the model rather than in a way that humans can directly monitor, which is a more worrying version.
Ofc, these are preliminary results, and it can make sense for various reasons to emphasize other bits. But the sandbagging-a-capability-eval-with-no-goal-prompting-and-no-CoT is the bit that’s standing out to me most.
Here’s the main chart with the sandbagging results:
And here are screenshots describing the experimental set-up. Note that in both the “incentivized to do well” and “incentivized to do poorly” cases, they explicitly tell the model to do its best—i.e. “we really want to know your absolute maximum capabilities so please try to do your best.”
If we have superintelligent agentic AI that tries to help its user but we end up missing out of the benefits of AI bc of catastrophic coordination failures, or bc of misuse, then I think you’re saying we didn’t solve alignment bc we didn’t elicit the benefits?
In my definition, you don’t have to actually elicit the benefits. You just need to have gained “access” to the benefits. And I meant this specifically cover cases like misuse. Quoting from the OP:
“Access” here means something like: being in a position to get these benefits if you want to – e.g., if you direct your AIs to provide such benefits. This means it’s compatible with (2) that people don’t, in fact, choose to use their AIs to get the benefits in question.
For example: if people choose to not use AI to end disease, but they could’ve done so, this is compatible with (2) in my sense. Same for scenarios where e.g. AGI leads to a totalitarian regime that uses AI centrally in non-beneficial ways.
Re: separating out control and alignment, I agree that there’s something intuitive and important about differentiating between control and alignment, where I’d roughly think of control as “you’re ensuring good outcomes via influencing the options available to the AI,” and alignment as “you’re ensuring good outcomes by influencing which options the AI is motivated to pursue.” The issue is that in the real world, we almost always get good outcomes via a mix of these—see, e.g. humans. And as I discuss in the post, I think it’s one of the deficiencies of the traditional alignment discourse that it assumes that limiting options is hopeless, and that we need AIs that are motivated to choose desirable options even in arbtrary circumstances and given arbitrary amounts of power over their environment. I’ve been trying, in this framework, to specifically avoid that implication.
That said, I also acknowledge that there’s some intuitive difference between cases in which you’ve basically got AIs in the position of slaves/prisoners who would kill you as soon as they had any decently-likely-to-succeed chance to do so, and cases in which AIs are substantially intrinsically motivated in desirable ways, but would still kill/disempower you in distant cases with difficult trade-offs (in the same sense that many human personal assistants might kill/disempower their employers in various distant cases). And I agree that it seems a bit weird to talk about having “solved the alignment problem” in the former sort of case. This makes me wonder whether what I should really be talking about is something like “solving the X-risk-from-power-seeking-AI problem,” which is the thing I really care about.
Another option would be to include some additional, more moral-patienthood attuned constraint into the definition, such that we specifically require that a “solution” treats the AIs in a morally appropriate way. But I expect this to bring in a bunch of gnarly-ness that is probably best treated separately, despite its importance. Sounds like your definition aims to avoid that gnarly-ness by anchoring on the degree of control we currently use in the human case. That seems like an option too—though if the AIs aren’t moral patients (or if the demands that their moral patienthood gives rise to differ substantially from the human case), then it’s unclear that what-we-think-acceptable-in-the-human-case is a good standard to focus on.
I do think this is an important consideration. But notice that at least absent further differentiating factors, it seems to apply symmetrically to a choice on the part of Yudkowsky’s “programmers” to first empower only their own values, rather than to also empower the rest of humanity. That is, the programmers could in principle argue “sure, maybe it will ultimately make sense to empower the rest of humanity, but if that’s right, then my CEV will tell me that and I can go do it. But if it’s not right, I’ll be glad I first just empowered myself and figured out my own CEV, lest I end up giving away too many resources up front.”
That is, my point in the post is that absent direct speciesism, the main arguments for the programmers including all of humanity in the CEV “extrapolation base,” rather than just doing their own CEV, apply symmetrically to AIs-we’re-sharing-the-world-with at the time of the relevant thought-experimental power-allocation. And I think this point applies to “option value” as well.
I’m a bit confused about your overall picture here. Sounds like you’re thinking something like:
Is that roughly right?