This makes it even clearer that Altman’s claims of ignorance were lies – he cannot possibly have believed that former employees unanimously signed non-disparagements for free!
This is still quoting Neel, right? Presumably you intended to indent it.
Have you looked through the FLI faculty listed there?
How many seem useful supervisors for this kind of thing? Why?
If we’re sticking to the [generate new approaches to core problems] aim, I can see three or four I’d be happy to recommend, conditional on their agreeing upfront to the exploratory goals, and to publication not being required (or only a very low, concretely agreed number).
There are about ten more that seem not-obviously-a-terrible-idea, but probably not great (e.g. those who I expect have a decent understanding of the core problems, but basically aren’t working on them).
The majority don’t write anything that suggests they know what the core problems are.
For almost all of these supervisors, doing a PhD would seem to impose quite a few constraints, undesirable incentives, and a poor environment.
From an individual’s point of view this can still make sense, if it’s one of the only ways to get stable medium-term funding.
From a funder’s point of view, it seems nuts.
(again, less nuts if the goal were [incremental progress on prosaic approaches, and generation of a respectable publication record])
A few points here (all with respect to a target of “find new approaches to core problems in AGI alignment”):
It’s not clear to me what the upside of the PhD structure is supposed to be here (beyond respectability). If the aim is to avoid being influenced by most of the incentives and environment, that’s more easily achieved by not doing a PhD. (to the extent that development of research ‘taste’/skill acts to service a publish-or-perish constraint, that’s likely to be harmful)
This is not to say that there’s nothing useful about an academic context—only that the sensible approach seems to be [create environments with some of the same upsides, but fewer downsides].
I can see a more persuasive upside where the PhD environment gives:
Access to deep expertise in some relevant field.
The freedom to explore openly (without any “publish or perish” constraint).
This seems likely to be both rare, and more likely for professors not doing ML. I note here that ML professors are currently not solving fundamental alignment problems—we’re not in a [Newtonian physics looking for Einstein] situation; more [Aristotelian physics looking for Einstein]. I can more easily imagine a mathematics PhD environment being useful than an ML one (though I’d expect this to be rare too).
This is also not to say that a PhD environment might not be useful in various other ways. For example, I think David Krueger’s lab has done and is doing a bunch of useful stuff—but it’s highly unlikely to uncover new approaches to core problems.
For example, of the 213 concrete problems posed here, how many would lead us to think [it’s plausible that a good answer to this question leads to meaningful progress on core AGI alignment problems]? 5? 10? (many more can be a bit helpful for short-term safety)
There are a few where sufficiently general answers would be useful, but I don’t expect such generality—both since it’s hard, and because incentives constantly push towards [publish something on this local pattern], rather than [don’t waste time running and writing up experiments on this local pattern, but instead investigate underlying structure].
I note that David’s probably at the top of my list for [would be a good supervisor for this kind of thing, conditional on having agreed the exploratory aims at the outset], but the environment still seems likely to be not-close-to-optimal, since you’d be surrounded by people not doing such exploratory work.
RFPs seem a good tool here for sure. Other coordination mechanisms too.
(And perhaps RFPs for RFPs, where sketching out high-level desiderata is easier than specifying parameters for [type of concrete project you’d like to see])
Oh and I think the MATS Winter Retrospective seems great from the [measure a whole load of stuff] perspective. I think it’s non-obvious what conclusions to draw, but more data is a good starting point. It’s on my to-do-list to read it carefully and share some thoughts.
I agree with Tsvi here (as I’m sure will shock you :)).
I’d make a few points:
“our revealed preferences largely disagree with point 1”—this isn’t clear at all. We know MATS’ [preferences, given the incentives and constraints under which MATS operates]. We don’t know what you’d do absent such incentives and constraints.
I note also that “but we aren’t Refine” has the form [but we’re not doing x], rather than [but we have good reasons not to do x]. (I don’t think MATS should be Refine, but “we’re not currently 20% Refine-on-ramp” is no argument that it wouldn’t be a good idea)
MATS is in a stronger position than most to exert influence on the funding landscape. Sure, others should make this case too, but MATS should be actively making a case for what seems most important (to you, that is), not only catering to the current market.
Granted, this is complicated by MATS’ own funding constraints—you have more to lose too (and I do think this is a serious factor, undesirable as it might be).
If you believe that the current direction of the field isn’t great, then “ensure that our program continues to meet the talent needs of safety teams” is simply the wrong goal.
Of course the right goal isn’t diametrically opposed to that—but still, not that.
There’s little reason to expect the current direction of the field to be close to ideal:
At best, the accuracy of the field’s collective direction will tend to correspond to its collective understanding—which is low.
There are huge commercial incentives exerting influence.
There’s no clarity on what constitutes (progress towards) genuine impact.
There are many incentives to work on what’s already not neglected (e.g. things with easily located “tight empirical feedback loops”). The desirable properties of the non-neglected directions are a large part of the reason they’re not neglected.
Similar arguments apply to [field-level self-correction mechanisms].
Given (4), there’s an inherent sampling bias in taking [needs of current field] as [what MATS should provide]. Of course there’s still an efficiency upside in catering to [needs of current field] to a large extent—but efficiently heading in a poor direction still sucks.
I think it’s instructive to consider extreme-field-composition thought experiments: suppose the field were composed of [10,000 researchers doing mech interp] and [10 researchers doing agent foundations].
Where would there be most jobs? Most funding? Most concrete ideas for further work? Does it follow that MATS would focus almost entirely on meeting the needs of all the mech interp orgs? (I expect that almost all the researchers in that scenario would claim mech interp is the most promising direction)
If you think that feedback loops along the lines of [[fast legible work on x] --> [x seems productive] --> [more people fund and work on x]] lead to desirable field dynamics in an AIS context, then it may make sense to cater to the current market. (personally, I expect this to give a systematically poor signal, but it’s not as though it’s easy to find good signals)
If you don’t expect such dynamics to end well, it’s worth considering to what extent MATS can be a field-level self-correction mechanism, rather than a contributor to predictably undesirable dynamics.
I’m not claiming this is easy!!
I’m claiming that it should be tried.
Detailing what job and funding opportunities should exist in the technical AI safety field is beyond the scope of this report.
Understandable, but do you know anyone who’s considering this? As the core of their job, I mean—not on a [something they occasionally think/talk about for a couple of hours] level. It’s non-obvious to me that anyone at OpenPhil has time for this.
It seems to me that the collective ‘decision’ we’ve made here is something like:
Any person/team doing this job would need:
To have extremely good AIS understanding.
To be broadly respected.
To have a lot of time.
Nobody like this exists.
We’ll just hope things work out okay using a passive distributed approach.
To my eye this leads to a load of narrow optimization according to often-not-particularly-enlightened metrics—lots of common incentives, common metrics, and correlated failure.
Oh and I still think MATS is great :) - and that most of these issues are only solvable with appropriate downstream funding landscape alterations. That said, I remain hopeful that MATS can nudge things in a helpful direction.
For reference there’s this: What I learned running Refine
When I talked to Adam about this (over 12 months ago), he didn’t think there was much to say beyond what’s in that post. Perhaps he’s updated since.
My sense is that I view it as more of a success than Adam does. In particular, I think it’s a bit harsh to solely apply the [genuinely new directions discovered] metric. Even when doing everything right, I expect the hit rate to be very low there, with [variation on current framing/approach] being the most common type of success.
Agreed that Refine’s timescale is clearly too short.
However, a much longer program would set a high bar for whoever’s running it.
Personally, I’d only be comfortable doing so if the setup were flexible enough that it didn’t seem likely to limit the potential of participants (by being less productive-in-the-sense-desired than counterfactual environments).
(understood that you’d want to avoid the below by construction through the specification)
I think the worries about a “least harmful path” failure mode would also apply to a “below 1 catastrophic event per millennium” threshold. It’s not obvious to me that the vast majority of ways to [avoid significant risk of catastrophe-according-to-our-specification] wouldn’t be highly undesirable outcomes.
It seems to me that “greatly penalize the additional facts which are enforced” is a two-edged sword: we want various additional facts to be highly likely, since our acceptability specification doesn’t capture everything that we care about.
I haven’t thought about it in any detail, but doesn’t using time-bounded utility functions also throw out any acceptability guarantee for outcomes beyond the time-bound?
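To spell out my worry with a sketch (my notation, not from the post): a time-bounded utility function of the form

$$U_T(\tau) \;=\; \sum_{t=0}^{T} u(s_t)$$

assigns identical value to any two trajectories that agree on the states $s_0,\dots,s_T$, so an acceptability guarantee derived from $U_T$ constrains nothing whatsoever about states after $T$.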
[again, the below is all in the spirit of “I think this direction is plausibly useful, and I’d like to see more work on it”]
not to have any mental influences on people other than those which factor through the system’s pre-agreed goals being achieved in the world.
Sure, but this seems to say “Don’t worry, the malicious superintelligence can only manipulate your mind indirectly”. This is not the level of assurance I want from something calling itself “Guaranteed safe”.
It is worth noting here that a potential failure mode is that a truly malicious general-purpose system in the box could decide to encode harmful messages in irrelevant details
This is one mechanism by which such a system could cause great downstream harm.
Suppose that we have a process to avoid this. What assurance do we have that there aren’t other mechanisms to cause harm?
I don’t yet buy the description complexity penalty argument (as I currently understand it—but quite possibly I’m missing something). It’s possible to manipulate by strategically omitting information. Perhaps the “penalise heavily biased sampling” is intended to avoid this (??). If so, I’m not sure how this gets us more than a hand-waving argument.
I imagine it’s very hard to do indirect manipulation without adding much complexity.
I imagine that ASL-4+ systems are capable of many very hard things.
Similar reasoning leads me to initial skepticism of all [safety guarantee by penalizing some-simple-x] claims. This amounts to a claim that reducing x necessarily makes things safer—which I expect is untrue for any simple x.
I can buy that there are simple properties whose reduction guarantees safety if it’s done to an extreme degree—but then I’m back to expecting the system to do nothing useful.
As an aside, I’d note that such processes (e.g. complexity penalties) seem likely to select out helpful behaviours too. That’s not a criticism of the overall approach—I just want to highlight that I don’t think we get to have both [system provides helpful-in-ways-we-hadn’t-considered output] and [system can’t produce harmful output]. Allowing the former seems to allow the latter.
I would like to fund a sleeper-agents-style experiment on this by the end of 2025
That’s probably a good idea, but this kind of approach doesn’t seem in keeping with a “Guaranteed safe” label. More of a “We haven’t yet found a way in which this is unsafe”.
This seems interesting, but I’ve seen no plausible case that there’s a version of (1) that’s both sufficient and achievable. I’ve seen Davidad mention e.g. approaches using boundaries formalization. This seems achievable, but clearly not sufficient. (boundaries don’t help with e.g. [allow the mental influences that are desirable, but not those that are undesirable])
The [act sufficiently conservatively for safety, relative to some distribution of safety specifications] constraint seems likely to lead to paralysis (either of the form [AI system does nothing], or [AI system keeps the world locked into some least-harmful path], depending on the setup—and here of course “least harmful” isn’t a utopia, since it’s a distribution of safety specifications, not desirability specifications).
Am I mistaken about this?
I’m very pleased that people are thinking about this, but I fail to understand the optimism—hopefully I’m confused somewhere!
Is anyone working on toy examples as proof of concept?
I worry that there’s so much deeply technical work here that not enough time is being spent to check that the concept is workable (is anyone focusing on this?). I’d suggest focusing on mental influences: what kind of specification would allow me to radically change my ideas, but not to be driven insane? What’s the basis to think we can find such a specification?
It seems to me that finding a fit-for-purpose safety/acceptability specification won’t be significantly easier than finding a specification for ambitious value alignment.
So no, not disincentivizing making positive EV bets, but updating about the quality of decision-making that has happened in the past.
I think there’s a decent case that such updating will indeed disincentivize making positive EV bets (in some cases, at least).
In principle we’d want to update on the quality of all past decision-making. That would include both [made an explicit bet by taking some action] and [made an implicit bet through inaction]. With such an approach, decision-makers could be punished/rewarded with the symmetry required to avoid undesirable incentives (mostly).
Even here it’s hard, since there’d always need to be a [gain more influence] mechanism to balance the possibility of losing your influence.
In practice, most of the implicit bets made through inaction go unnoticed—even where they’re high-stakes (arguably especially when they’re high-stakes: most counterfactual value lies in the actions that won’t get done by someone else; you won’t be punished for being late to the party when the party never happens).
That leaves the explicit bets. To look like a good decision-maker the incentive is then to make low-variance explicit positive EV bets, and rely on the fact that most of the high-variance, high-EV opportunities you’re not taking will go unnoticed.
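A toy simulation of this incentive (my own illustration; the bet parameters are made up, not from the thread): judged by a scoreboard of explicit-bet outcomes, a low-variance bettor looks far better than a high-variance bettor with roughly double the EV.

```python
import random

random.seed(0)
N = 100_000  # number of independent bets per bettor

def simulate(p, payoff, n=N):
    """Simulate n bets that each pay `payoff` with probability p, else 0.
    Returns (average value per bet, fraction of bets that 'succeeded')."""
    outcomes = [payoff if random.random() < p else 0 for _ in range(n)]
    ev = sum(outcomes) / n
    hit_rate = sum(o > 0 for o in outcomes) / n
    return ev, hit_rate

# Hypothetical, purely illustrative numbers:
safe_ev, safe_hits = simulate(0.9, 2)    # low-variance: EV = 1.8
bold_ev, bold_hits = simulate(0.1, 40)   # high-variance: EV = 4.0

print(f"safe: EV per bet = {safe_ev:.2f}, hit rate = {safe_hits:.2f}")
print(f"bold: EV per bet = {bold_ev:.2f}, hit rate = {bold_hits:.2f}")
# The bold bettor has roughly double the EV, yet an outcome-count
# scoreboard rates the safe bettor ~9x more 'successful'.
```

If decision-makers are updated on per-bet outcomes rather than EV, the safe strategy dominates the scoreboard, which is exactly the disincentive against high-variance positive EV bets described above.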
From my by-no-means-fully-informed perspective, the failure mode at OpenPhil in recent years seems not to be [too many explicit bets that don’t turn out well], but rather [too many failures to make unclear bets, so that most EV is left on the table]. I don’t see support for hits-based research. I don’t see serious attempts to shape the incentive landscape to encourage sufficient exploration. It’s not clear that things are structurally set up so anyone at OP has time to do such things well (my impression is that they don’t have time, and that thinking about such things is no-one’s job (?? am I wrong ??)).
It’s not obvious to me whether the OpenAI grant was a bad idea ex-ante. (though probably not something I’d have done)
However, I think that another incentive towards middle-of-the-road, risk-averse grant-making is the last thing OP needs.
That said, I suppose much of the downside might be mitigated by making a distinction between [you wasted a lot of money in ways you can’t legibly justify] and [you funded a process with (clear, ex-ante) high negative impact].
If anyone’s proposing punishing the latter, I’d want it made very clear that this doesn’t imply punishing the former. I expect that the best policies do involve wasting a bunch of money in ways that can’t be legibly justified on the individual-funding-decision level.
Some thoughts:
Necessary conditions aren’t sufficient conditions. Lists of necessary conditions can leave out the hard parts of the problem.
The hard part of the problem is in getting a system to robustly behave according to some desirable pattern (not simply to have it know and correctly interpret some specification of the pattern).
I don’t see any reason to think that prompting would achieve this robustly.
As an attempt at a robust solution, without some other strong guarantee of safety, this is indeed a terrible idea.
I note that I don’t expect trying it empirically to produce catastrophe in the immediate term (though I can’t rule it out).
I also don’t expect it to produce useful understanding of what would give a robust generalization guarantee.
With a lot of effort we might achieve [we no longer notice any problems]. This is not a generalization guarantee. It is an outcome I consider plausible after putting huge effort into eliminating all noticeable problems.
The “capabilities are very important [for safety]” point seems misleading:
Capabilities create the severe risks in the first place.
We can’t create a safe AGI without advanced capabilities, but we may be able to understand how to make an AGI safe without advanced capabilities.
There’s no “...so it makes sense that we’re working on capabilities” corollary here.
The correct global action would be to try gaining theoretical understanding for a few decades before pushing the cutting edge on capabilities. (clearly this requires non-trivial coordination!)
I think it’s important to distinguish between:
Has understood a load of work in the field.
Has understood all known fundamental difficulties.
It’s entirely possible to achieve (1) without (2).
I’d be wary of assuming that any particular person has achieved (2) without good evidence.
Relevant here is Geoffrey Irving’s AXRP podcast appearance. (if anyone already linked this, I missed it)
I think Daniel Filan does a good job there both in clarifying debate and in questioning its utility (or at least the role of debate-as-solution-to-fundamental-alignment-subproblems). I don’t specifically remember satisfying answers to your (1)/(2)/(3), but figured it’s worth pointing at regardless.
Despite not answering all possible goal-related questions a priori, the reductionist perspective does provide a tractable research program for improving our understanding of AI goal development. It does this by reducing questions about goals to questions about behaviors observable in the training data.
[emphasis mine]
This might be described as “a reductionist perspective”. It is certainly not “the reductionist perspective”, since reductionist perspectives need not limit themselves to “behaviors observable in the training data”.
A more reasonable-to-my-mind behavioral reductionist perspective might look like this.
Ruling out goal realism as a good way to think does not leave us with [the particular type of reductionist perspective you’re highlighting].
In practice, I think the reductionist perspective you point at is:
Useful, insofar as it answers some significant questions.
Highly misleading if we ever forget that [this perspective doesn’t show us that x is a problem] doesn’t tell us [x is not a problem].
Sure, understood.
However, I’m still unclear what you meant by “This level of understanding isn’t sufficient for superhuman persuasion.”. If ‘this’ referred to [human coworker level], then you’re correct (I now guess you did mean this ??), but it seems a mildly strange point to make. It’s not clear to me why it’d be significant in the context without strong assumptions on correlation of capability in different kinds of understanding/persuasion.
I interpreted ‘this’ as referring to the [understanding level of current models]. In that case it’s not clear to me that this isn’t sufficient for superhuman persuasion capability. (by which I mean having the capability to carry out at least one strategy that fairly robustly results in superhuman persuasiveness in some contexts)
Do current models have better understanding of text authors than the human coworkers of these authors? I expect this isn’t true right now (though it might be true for more powerful models for people who have written a huge amount of stuff online). This level of understanding isn’t sufficient for superhuman persuasion.
Both “better understanding” and in a sense “superhuman persuasion” seem to be too coarse a way to think about this (I realize you’re responding to a claim-at-similar-coarseness).
Models don’t need to be capable of a Pareto improvement on human persuasion strategies to have one superhuman strategy in one dangerous context. This seems likely to require understanding something-about-an-author better than humans do, not everything-about-an-author better.
Overall, I’m with you in not (yet) seeing compelling reasons to expect a super-human persuasion strategy to emerge from pretraining before human-level R&D.
However, a specific [doesn’t understand an author better than coworkers] → [unlikely there’s a superhuman persuasion strategy] argument seems weak.
It’s unclear to me what kinds of understanding are upstream pre-requisites of at least one [get a human to do what you want] strategy. It seems pretty easy to miss possibilities here.
If we don’t understand what the model would need to infer from context in order to make a given strategy viable, it may be hard to provide the relevant context for an evaluation.
Obvious-to-me adjustments don’t necessarily help. E.g. giving huge amounts of context, since [inferences about author given input ()] are not a subset of [inferences about author given input ( … )].
Thanks for the thoughtful response.
A few thoughts:
If length is the issue, then replacing “leads” with “led” would reflect the reality.
I don’t have an issue with titles like ”...Improving safety...” since it has a [this is what this line of research is aiming at] vibe, rather than a [this is what we have shown] vibe. Compare “curing cancer using x” to “x cures cancer”.
Also in that particular case your title doesn’t suggest [we have achieved AI control]. I don’t think it’s controversial that control would improve safety, if achieved.
I agree that this isn’t a huge deal in general—however, I do think it’s usually easy to fix: either a [name a process, not a result] or a [say what happened, not what you guess it implies] approach is pretty general.
Also agreed that improving summaries is more important. Quite hard to achieve given the selection effects: [x writes a summary on y] tends to select for [x is enthusiastic about y] and [x has time to write a summary]. [x is enthusiastic about y] in turn selects for [x misunderstands y to be more significant than it is].
Improving this situation deserves thought and effort, but seems hard. Great communication from the primary source is clearly a big plus (not without significant time cost, I’m sure). I think your/Buck’s posts on the control stuff are commendably clear and thorough.
I expect the paper itself is useful (I’ve still not read it). In general I’d like the focus to be on understanding where/how/why debate fails—both in the near-term cases, and the more exotic cases (though I expect the latter not to look like debate-specific research). It’s unsurprising that it’ll work most of the time in some contexts. Completely fine for [show a setup that works] to be the first step, of course—it’s just not the interesting bit.
I’d be curious what the take is of someone who disagrees with my comment.
(I’m mildly surprised, since I’d have predicted more of a [this is not a useful comment] reaction, than a [this is incorrect] reaction)
I’m not clear whether the idea is that:
The title isn’t an overstatement.
The title is not misleading. (e.g. because “everybody knows” that it’s not making a claim of generality/robustness)
The title will not mislead significant amounts of people in important ways. It’s marginally negative, but not worth time/attention.
There are upsides to the current name, and it seems net positive. (e.g. if it’d get more attention, and [paper gets attention] is considered positive)
This is the usual standard, so [it’s fine] or [it’s silly to complain about] or …?
Something else.
I’m not claiming that this is unusual, or a huge issue on its own.
I am claiming that the norms here seem systematically unhelpful.
I’m more interested in the general practice than this paper specifically (though I think it’s negative here).
I’d be particularly interested in a claim of (4) - and whether the idea here is something like [everyone is doing this, it’s an unhelpful equilibrium, but if we unilaterally depart from it it’ll hurt what we care about and not fix the problem]. (this seems incorrect to me, but understandable)
Interesting—I look forward to reading the paper.
However, given that most people won’t read the paper (or even the abstract), could I appeal for paper titles that don’t overstate the generality of the results? I know it’s standard practice in most fields not to bother with caveats in the title, but here it may actually matter if e.g. those working in governance think that you’ve actually shown “Debating with More Persuasive LLMs Leads to More Truthful Answers”, rather than “In our experiments, Debating with More Persuasive LLMs Led to More Truthful Answers”.
The title matters to those who won’t read the paper, and can’t easily guess at the generality of what you’ll have shown (e.g. that your paper doesn’t include theoretical results suggesting that we should expect this pattern to apply robustly or in general). Again, I know this is a general issue—this just happens to be a context where I can point this out with high confidence without having read the paper :).
On your (2), I think you’re ignoring an understanding-related asymmetry:
Without clear models describing (a path to) a solution, it is highly unlikely we have a workable solution to a deep and complex problem:
Absence of concrete [we have (a path to) a solution] is pretty strong evidence of absence.
[EDIT for clarity, by “we have” I mean “we know of”, not “there exists”; I’m not claiming there’s strong evidence that no path to a solution exists]
Whether or not we have clear models of a problem, it is entirely possible for it to exist and to kill us:
Absence of concrete [there-is-a-problem] evidence is weak evidence of absence.
A problem doesn’t have to wait until we have formal arguments or strong, concrete empirical evidence for its existence before killing us. To claim that it’s “premature” to shut down the field before we have [evidence of type x], you’d need to make a case that [doom before we have evidence of type x] is highly unlikely.
A large part of the MIRI case is that there is much we don’t understand, and that parts of the problem we don’t understand are likely to be hugely important. An evidential standard that greatly down-weights any but the most rigorous, legible evidence is liable to lead to death-by-sampling-bias.
Of course it remains desirable for MIRI arguments to be as legible and rigorous as possible. Empiricism would be nice too (e.g. if someone could come up with concrete problems whose solution would be significant evidence for understanding something important-according-to-MIRI about alignment).
But ignoring the asymmetry here is a serious problem.
On your (3), it seems to me that you want “skeptical” to do more work than is reasonable. I agree that we “should be skeptical of purely theoretical arguments for doom”—but initial skepticism does not imply [do not update much on this]. It implies [consider this very carefully before updating]. It’s perfectly reasonable to be initially skeptical but to make large updates once convinced.
I do not think [the arguments are purely theoretical] is one of your true objections—rather it’s that you don’t find these particular theoretical arguments convincing. That’s fine, but no argument against theoretical arguments.