(Later added disclaimer: it’s a good idea to add “I feel like...” before the judgment in this comment, so that you keep in mind that I’m talking about my impressions and frustrations, rarely stating obvious facts (despite the language making it look so))
Thanks for trying to understand my point and asking me for more details. I appreciate it.
Yet I feel weird when trying to answer, because my gut reaction to your comment is that you’re asking the wrong question? Also, the compression of my view to “EY’s stances seem to you to be mostly distracting people from the real work” sounds more lossy than I’m comfortable with. So let me try to clarify and focus on these feelings and impressions, then I’ll answer more about which success stories or directions excite me.
My current problem with EY’s stances is twofold:
First, in posts like this one, he literally writes that everything done under the label of alignment is faking it and not even attacking the problem, except for like 3 people who, even if they're trying, have it all wrong. I think this is completely wrong, and it's even more annoying because I find that most people working on alignment try far harder to justify why they expect their work to matter than EY and the old-school MIRI team ever did.
This is a problem because it doesn't help anyone working in the field to maybe solve the problems with their approaches that EY sees, which sounds like a massive missed opportunity.
This is also a problem because EY's opinions are still quite promoted in the community (especially here on the AF and LW), such that newcomers who look to what the founder of the field has to say come away with the impression that no one is doing valuable work.
Far more speculative (because I don't know EY personally), but I expect that kind of judgment to come not so much from a place of all-encompassing genius as from generalization after reading some posts/papers. And following this thread I've received messages from people who were just as annoyed as I was, and who felt their results had been dismissed without even a comment, or classified as trivial when everyone else, including the authors, was quite surprised by them. I'm ready to give EY a bit of "he just sees further than most people", but not enough that he can discard the whole field from reading a couple of AF posts.
Second, historically, a lot of MIRI's work has followed a specific epistemic strategy of trying to understand the optimal ways of deciding and thinking, both to predict how an AGI would actually behave and to try to align it. I'm not that convinced by this approach, but even giving it the benefit of the doubt, it has in no way led to accomplishments big enough to justify EY's (and MIRI's?) thinly veiled contempt for anyone not doing that. This had and still has many bad impacts on the field and on new entrants.
A specific subgroup of people tends to be nerd-sniped by this older MIRI work, because it's the only part of the field that is more formal, but IMO at the cost of most of what matters about alignment and most of the grounding.
People who don't have the technical skill to work on MIRI's older agenda feel like they have to skill up drastically in maths to be able to do anything relevant in alignment. I literally mentored three people like that, who could actually do a lot of good thinking and cared about alignment, and I had to drive home the point that they didn't need super-advanced maths skills, except if they wanted to do very, very specific things. I find that particularly sad because IMO the biggest positive contribution to the field by EY and early MIRI comes from their less formal and more philosophical work, which is exactly the kind of work that is stifled by the consequences of this stance.
I also feel people here underestimate how repelling this whole attitude has been for years for most people outside the MIRI bubble. From testimonials by a bunch of more ML-oriented people, and from how any discussion of alignment needs to clarify that you don't share MIRI's contempt with experimental work and not doing only decision theory and logic, I expect that this has been one of the big factors in alignment not being taken seriously and in people not wanting to work on it.
Also important to note: I don't know if EY and MIRI still think this kind of technical research is highly valuable and the real research that should be done, but they have been influential enough that I think a big part of the damage is done, and I read some parts of this post as "If only we could do the real logic thing, but we can't, so we're doomed". There's also the question of the separation between the image MIRI and EY project and what they actually think.
Going back to your question, it has a weird double-standard feel. Like, every AF post on more prosaic alignment methods comes with its success story and reasons for caring about the research. If EY and MIRI want to argue that we're all doomed, they have the burden of proof to explain why everything that's been done is terrible and will never lead to alignment. Once again, proving that we won't be able to solve a problem is incredibly hard and improbable. Funny how everyone here gets that for the "AGI is impossible" question, but apparently that doesn't apply to "Actually working with AIs and Thinking about real AIs will never let you solve alignment in time."
Still, it’s not too difficult to list a bunch of promising stuff, so here’s a non-exhaustive list:
* John Wentworth’s Natural Abstraction Hypothesis, which is about checking his formalism-backed intuition that NNs actually learn similar abstractions that humans do. The success story is pretty obvious, in that if John is right, alignment should be far easier.
* People from EleutherAI working on understanding LMs and GPT-like models as simulators of processes (called simulacra), as well as the safety benefits (corrigibility) and new strategies (leveraging the output distribution in smart ways) that this model allows.
* Evan Hubinger’s work on finding predicates that we could check during training to avoid deception and behaviors we’re worried about. He has a full research agenda but it’s not public yet. Maybe our post on myopic decision theory could be relevant.
* Stuart Armstrong’s work on model splintering, especially his AI Safety Subprojects which are experimental, not obvious what they will find, and directly relevant to implementing and using model splintering to solve alignment.
* Paul Christiano’s recent work on making question-answerers give useful information instead of what they expect humans to answer, which has a clear success story for these kinds of powerful models and their use in building stronger AIs and supervising training for example.
It’s also important to remember how alignment and the related problems and ideas are still not that well explained, distilled and analyzed for teaching and criticism. So I’m excited too about work that isn’t directly solving alignment but just making things clearer and more explicit, like Evan’s recent post or my epistemic strategies analysis.
Thanks for naming specific work you think is really good! I think it’s pretty important here to focus on the object-level. Even if you think the goodness of these particular research directions isn’t cruxy (because there’s a huge list of other things you find promising, and your view is mainly about the list as a whole rather than about any particular items on it), I still think it’s super important for us to focus on object-level examples, since this will probably help draw out what the generators for the disagreement are.
John Wentworth’s Natural Abstraction Hypothesis, which is about checking his formalism-backed intuition that NNs actually learn similar abstractions that humans do. The success story is pretty obvious, in that if John is right, alignment should be far easier.
Eliezer liked this post enough that he asked me to signal-boost it in the MIRI Newsletter back in April.
And Paul Christiano and Stuart Armstrong are two of the people Eliezer named as doing very-unusually good work. We continue to pay Stuart to support his research, though he’s mainly supported by FHI.
And Evan works at MIRI, which provides some Bayesian evidence about how much we tend to like his stuff. :)
So maybe there’s not much disagreement here about what’s relatively good? (Or maybe you’re deliberately picking examples you think should be ‘easy sells’ to Steel Eliezer.)
The main disagreement, of course, is about how absolutely promising this kind of stuff is, not how relatively promising it is. This could be some of the best stuff out there, but my understanding of the Adam/Eliezer disagreement is that it’s about ‘how much does this move the dial on actually saving the world?’ / ‘how much would we move the dial if we just kept doing more stuff like this?’.
Actually, this feels to me like a thing that your comments have bounced off of a bit. From my perspective, Eliezer’s statement was mostly saying ‘the field as a whole is failing at our mission of preventing human extinction; I can name a few tiny tidbits of relatively cool things (not just MIRI stuff, but Olah and Christiano), but the important thing is that in absolute terms the whole thing is not getting us to the world where we actually align the first AGI systems’.
My Eliezer-model thinks nothing (including MIRI stuff) has moved the dial much, relative to the size of the challenge. But your comments have mostly been about a sort of status competition between decision theory stuff and ML stuff, between prosaic stuff and ‘gain new insights into intelligence’ stuff, between MIRI stuff and non-MIRI stuff, etc. This feels to me like it’s ignoring the big central point (‘our work so far is wildly insufficient’) in order to haggle over the exact ordering of the wildly-insufficient things.
You’re zeroed in on the “vast desert” part, but the central point wasn’t about the desert-oasis contrast, it was that the whole thing is (on Eliezer’s model) inadequate to the task at hand. Likewise, you’re talking a lot about the “fake” part (and misstating Eliezer’s view as “everyone else [is] a faker”), when the actual claim was about “work that seems to me to be mostly fake or pointless or predictable” (emphasis added).
Maybe to you these feel similar, because they’re all just different put-downs. But… if those were true descriptions of things about the field, they would imply very different things.
I would like to put forward that Eliezer thinks, in good faith, that this is the best hypothesis that fits the data. I absolutely think reasonable people can disagree with Eliezer on this, and I don’t think we need to posit any bad faith or personality failings to explain why people would disagree.
Also, I feel like I want to emphasize that, like… it’s OK to believe that the field you’re working in is in a bad state? The social pressure against saying that kind of thing (or even thinking it to yourself) is part of why a lot of scientific fields are unhealthy, IMO. I’m in favor of you not taking for granted that Eliezer’s right, and pushing back insofar as your models disagree with his. But I want to advocate against:
* Saying false things about what the other person is saying. A lot of what you’ve said about Eliezer and MIRI is just obviously false (e.g., we have contempt for “experimental work” and think you can’t make progress by “Actually working with AIs and Thinking about real AIs”).
* Shrinking the window of ‘socially acceptable things to say about the field as a whole’ (as opposed to unsolicited harsh put-downs of a particular researcher’s work, where I see more value in being cautious).
I want to advocate ‘smack-talking the field is fine, if that’s your honest view; and pushing back is fine, if you disagree with the view’. I want to see more pushing back on the object level (insofar as people disagree), and less ‘how dare you say that, do you think you’re the king of alignment or something’ or ‘saying that will have bad social consequences’.
I think you’re picking up on a real thing of ‘a lot of people are too deferential to various community leaders, when they should be doing more model-building, asking questions, pushing back where they disagree, etc.’ But I think the solution is to shift more of the conversation to object-level argument (that is, modeling the desired behavior), and make that argument as high-quality as possible.
One thing I want to make clear is that I’m quite aware that my comments have not been as high-quality as they should have been. As I wrote in the disclaimer, I was writing from a place of frustration and annoyance, which also implies a focus on more status-y things. That sounded necessary to me to air out this frustration, and I think this was a good idea given the upvotes of my original post and the couple of people who messaged me to tell me that they were also annoyed.
That being said, much of what I was railing against is a general perception of the situation, formed from reading a lot of stuff but not necessarily stopping to study all the evidence before writing a fully thought-through opinion. I think this is where the “saying obviously false things” comes from (they are pretty easy to believe from just reading this post and a bunch of MIRI write-ups), and why your comments are really important for clarifying the discrepancy between the general mental picture I was drawing from and the actual reality. Also, recentering the discussion on the object level instead of on status arguments sounds like a good move.
You make a lot of good points and I definitely want to continue the conversation and have more detailed discussion, but I also feel that for the moment I need to take some steps back, read your comments and some of the pointers in other comments, and think a bit more about the question. I don’t think there’s much more to gain from me answering quickly, mostly in reaction.
(I also had the brilliant idea of starting this thread just when I was on the edge of burning out from working too much (during my holidays), so I’m just going to take some time off from work. But I definitely want to continue this conversation further when I come back, although probably not in this thread ^^)
That sounded necessary to me to air out this frustration, and I think this was a good idea given the upvotes of my original post and the couple of people who messaged me to tell me that they were also annoyed.
If you’d just aired out your frustration, framing claims about others in NVC-like ‘I feel like...’ terms (insofar as you suspect you wouldn’t reflectively endorse them), and then a bunch of people messaged you in private to say “thank you! you captured my feelings really well”, then that would seem clearly great to me.
I’m a bit worried that what instead happened is that you made a bunch of clearly-false claims about other people and gave a bunch of invalid arguments, mixed in with the feelings-stuff; and you used the content warning at the top of the message to avoid having to distinguish which parts of your long, detailed comment are endorsed or not (rather than also flagging this within the comment); and then you also ran with this in a bunch of follow-up comments that were similarly not-endorsed but didn’t even have the top-of-comment disclaimer. So I could imagine that some people who aren’t independently familiar with all the background facts could come away with a lot of wrong beliefs about the people you’re criticizing.
‘Other people liked my comment, so it was clearly a good thing’ doesn’t distinguish between the worlds where they like it because they share the feelings, vs. agreeing with the factual claims and arguments (and if the latter, whether they’re noticing and filtering out all the seriously false or not-locally-valid parts). If the former, I think it was good. If the latter, I think it was bad.
I’m a bit worried that what instead happened is that you made a bunch of clearly-false claims about other people and gave a bunch of invalid arguments, mixed in with the feelings-stuff; and you used the content warning at the top of the message to avoid having to distinguish which parts of your long, detailed comment are endorsed or not (rather than also flagging this within the comment); and then you also ran with this in a bunch of follow-up comments that were similarly not-endorsed but didn’t even have the top-of-comment disclaimer. So I could imagine that some people who aren’t independently familiar with all the background facts could come away with a lot of wrong beliefs about the people you’re criticizing.
That sounds a bit unfair, in the sense that it makes it look like I just invented stuff I didn’t believe and ran with it, when what actually happened was that I wrote about my frustrations but made the mistake of stating them as obvious facts instead of impressions.
Of course, I imagine you feel that my portrayal of EY and MIRI was also unfair, sorry about that.
(I added a note to the three most ranty comments on this thread saying that people should mentally add “I feel like...” to judgments in them.)
I’m confused. When I say ‘that’s just my impression’, I mean something like ‘that’s an inside-view belief that I endorse but haven’t carefully vetted’. (See, e.g., Impression Track Records, referring to Naming Beliefs.)
Example: you said that MIRI has “contempt with experimental work and not doing only decision theory and logic”.
My prior guess would have been that you don’t actually, for-real believe that—that it’s not your ‘impression’ in the above sense, more like ‘unendorsed venting/hyperbole that has a more complicated relation to something you really believe’.
If you do (or did) think that’s actually true, then our models of MIRI are much more different than I thought! Alternatively, if you agree this is not true, then that’s all I meant in the previous comment. (Sorry if I was unclear about that.)
I would say that with slight caveats (make “decision theory and logic” a bit larger to include some more mathy stuff and make “all experimental work” a bit smaller to not include Redwood’s work), this was indeed my model.
What made me update from our discussion is the realization that I interpreted the dismissal of basically all alignment research as “this has no value whatsoever and people doing it are just pretending to care on alignment”, where it should have been interpreted as something like “this is potentially interesting/new/exciting, but it doesn’t look like it brings us closer to solving alignment in a significant way, hence we’re still failing”.
‘Experimental work is categorically bad, but Redwood’s work doesn’t count’ does not sound like a “slight caveat” to me! What does this generalization mean at all if Redwood’s stuff doesn’t count?
(Neither, for that matter, does the difference between ‘decision theory and logic’ and ‘all mathy stuff MIRI has ever focused on’ seem like a ‘slight caveat’ to me—but in that case maybe it’s because I have a lot more non-logic, non-decision-theory examples in my mind that you might not be familiar with, since it sounds like you haven’t read much MIRI stuff?).
(Responding to entire comment thread) Rob, I don’t think you’re modeling what MIRI looks like from the outside very well.
There’s a lot of public stuff from MIRI on a cluster that has as central elements decision theory and logic (logical induction, Vingean reflection, FDT, reflective oracles, Cartesian Frames, Finite Factored Sets...)
There was once an agenda (AAMLS) that involved thinking about machine learning systems, but it was deprioritized, and the people working on it left MIRI.
There was a non-public agenda that involved Haskell programmers. That’s about all I know about it. For all I know they were doing something similar to the modal logic work I’ve seen in the past.
Eliezer frequently talks about how everyone doing ML work is pursuing dead ends, with potentially the exception of Chris Olah. Chris’s work is not central to the cluster I would call “experimentalist”.
There has been one positive comment on the KL-divergence result in summarizing from human feedback. That wasn’t the main point of that paper and was an extremely predictable result.
There has also been one positive comment on Redwood Research, which was founded by people who have close ties to MIRI. The current steps they are taking are not dramatically different from what other people have been talking about and/or doing.
There was a positive-ish comment on aligning narrowly superhuman models, though iirc it gave off more of an impression of “well, let’s at least die in a slightly more dignified way”.
I don’t particularly agree with Adam’s comments, but it does not surprise me that someone could come to honestly believe the claims within them.
So, the point of my comments was to draw a contrast between having a low opinion of “experimental work and not doing only decision theory and logic”, and having a low opinion of “mainstream ML alignment work, and of nearly all work outside the HRAD-ish cluster of decision theory, logic, etc.” I didn’t intend to say that the latter is obviously-wrong; my goal was just to point out how different those two claims are, and say that the difference actually matters, and that this kind of hyperbole (especially when it never gets acknowledged later as ‘oh yeah, that’s not true and wasn’t what I was thinking’) is not great for discussion.
I think it’s true that ‘MIRI is super not into most ML alignment work’, and I think it used to be true that MIRI put almost all of its research effort into HRAD-ish work, and regardless, this all seems like a completely understandable cached impression to have of current-MIRI. If I wrote stuff that makes it sound like I don’t think those views are common, reasonable, etc., then I apologize for that and disavow the thing I said.
But this is orthogonal to what I thought I was talking about, so I’m confused about what seems to me like a topic switch. Maybe the implied background view here is:
‘Adam’s elision between those two claims was a totally normal level of hyperbole/imprecision, like you might find in any LW comment. Picking on word choices like “only decision theory and logic” versus “only research that’s clustered near decision theory and logic in conceptspace”, or “contempt with experimental work” versus “assigning low EV to typical instances of empirical ML alignment work”, is an isolated demand for rigor that wouldn’t make sense as a general policy and isn’t, in any case, the LW norm.’
So, the point of my comments was to draw a contrast between having a low opinion of “experimental work and not doing only decision theory and logic”, and having a low opinion of “mainstream ML alignment work, and of nearly all work outside the HRAD-ish cluster of decision theory, logic, etc.” I didn’t intend to say that the latter is obviously-wrong; my goal was just to point out how different those two claims are, and say that the difference actually matters, and that this kind of hyperbole (especially when it never gets acknowledged later as ‘oh yeah, that’s not true and wasn’t what I meant’) is not great for discussion.
It occurs to me that part of the problem may be precisely that Adam et al. don’t think there’s a large difference between these two claims (that actually matters). For example, when I query my (rough, coarse-grained) model of [your typical prosaic alignment optimist], the model in question responds to your statement with something along these lines:
If you remove “mainstream ML alignment work, and nearly all work outside of the HRAD-ish cluster of decision theory, logic, etc.” from “experimental work”, what’s left? Perhaps there are one or two (non-mainstream, barely-pursued) branches of “experimental work” that MIRI endorses and that I’m not aware of—but even if so, that doesn’t seem to me to be sufficient to justify the idea of a large qualitative difference between these two categories.
In a similar vein to the above: perhaps one description is (slightly) hyperbolic and the other isn’t. But I don’t think replacing the hyperbolic version with the non-hyperbolic version would substantially change my assessment of MIRI’s stance; the disagreement feels non-cruxy to me. In light of this, I’m not particularly bothered by either description, and it’s hard for me to understand why you view it as such an important distinction.
Moreover: I don’t think [my model of] the prosaic alignment optimist is being stupid here. I think, to the extent that his words miss an important distinction, it is because that distinction is missing from his very thoughts and framing, not because he happened to choose his words somewhat carelessly when attempting to describe the situation. Insofar as this is true, I expect him to react to your highlighting of this distinction with (mostly) bemusement, confusion, and possibly even some slight suspicion (e.g. that you’re trying to muddy the waters with irrelevant nitpicking).
To be clear: I don’t think you’re attempting to muddy the waters with irrelevant nitpicking here. I think you think the distinction in question is important because it’s pointing to something real, true, and pertinent—but I also think you’re underestimating how non-obvious this is to people who (A) don’t already deeply understand MIRI’s view, and (B) aren’t in the habit of searching for ways someone’s seemingly pointless statement might actually be right.
I don’t consider myself someone who deeply understands MIRI’s view. But I do want to think of myself as someone who, when confronted with a puzzling statement [from someone whose intellectual prowess I generally respect], searches for ways their statement might be right. So, here is my attempt at describing the real crux behind this disagreement:
(with the caveat that, as always, this is my view, not Rob’s, MIRI’s, or anybody else’s)
(and with the additional caveat that, even if my read of the situation turns out to be correct, I think in general the onus is on MIRI to make sure they are understood correctly, rather than on outsiders to try to interpret them—at least, assuming that MIRI wants to make sure they’re understood correctly, which may not always be the best use of researcher time)
I think the disagreement is mostly about MIRI’s counterfactual behavior, not about their actual behavior. I think most observers (including both Adam and Rob) would agree that MIRI leadership has been largely unenthusiastic about a large class of research that currently falls under the umbrella “experimental work”, and that the amount of work in this class MIRI has been unenthused about significantly outweighs the amount of work they have been excited about.
Where I think Adam and Rob diverge is in their respective models of the generator of this observed behavior. I think Adam (and those who agree with him) thinks that the true boundary of the category [stuff MIRI finds unpromising] roughly coincides with the boundary of the category [stuff most researchers would call “experimental work”], such that anything that comes too close to “running ML experiments and seeing what happens” will be met with an immediate dismissal from MIRI. In other words, [my model of] Adam thinks MIRI’s generator is configured such that the ratio of “experimental work” they find promising-to-unpromising would be roughly the same across many possible counterfactual worlds, even if each of those worlds is doing “experiments” investigating substantially different hypotheses.
Conversely, I think Rob thinks the true boundary of the category [stuff MIRI finds unpromising] is mostly unrelated to the boundary of the category [stuff most researchers would call “experimental work”], and that—to the extent MIRI finds most existing “experimental work” unpromising—this is mostly because the existing work is not oriented along directions MIRI finds promising. In other words, [my model of] Rob thinks MIRI’s generator is configured such that the ratio of “experimental work” they find promising-to-unpromising would vary significantly across counterfactual worlds where researchers investigate different hypotheses; in particular, [my model of] Rob thinks MIRI would find most “experimental work” highly promising in the world where the “experiments” being run are those whose results Eliezer/Nate/etc. would consider difficult to predict in advance, and therefore convey useful information regarding the shape of the alignment problem.
I think Rob’s insistence on maintaining the distinction between having a low opinion of “experimental work and not doing only decision theory and logic”, and having a low opinion of “mainstream ML alignment work, and of nearly all work outside the HRAD-ish cluster of decision theory, logic, etc.” is in fact an attempt to gesture at the underlying distinction outlined above, and I think that his stringency on this matter makes significantly more sense in light of this. (Though, once again, I note that I could be completely mistaken in everything I just wrote.)
Assuming, however, that I’m (mostly) not mistaken, I think there’s an obvious way forward in terms of resolving the disagreement: try to convey the underlying generators of MIRI’s worldview. In other words, do the thing you were going to do anyway, and save the discussions about word choice for afterwards.
I also think I naturally interpreted the terms in Adam’s comment as pointing to specific clusters of work in today’s world, rather than universal claims about all work that could ever be done. That is, when I see “experimental work and not doing only decision theory and logic”, I automatically think of “experimental work” as pointing to a specific cluster of work that exists in today’s world (which we might call mainstream ML alignment), rather than “any information you can get by running code”. Whereas it seems you interpreted it as something closer to “MIRI thinks there isn’t any information to get by running code”.
My brain insists that my interpretation is the obvious one and is confused about how anyone (within the AI alignment field, who knows about the work that is being done) could interpret it as the latter. (Although the existence of non-public experimental work that isn’t mainstream ML is a good candidate for how you would start to interpret “experimental work” as the latter.) But this seems very plausibly a typical mind fallacy.
EDIT: Also, to explicitly say it, sorry for misunderstanding what you were trying to say. I did in fact read your comments as saying “no, MIRI is not categorically against mainstream ML work, and MIRI is not only working on HRAD-ish stuff like decision theory and logic, and furthermore this should be pretty obvious to outside observers”, and now I realize that is not what you were saying.
This is a good comment! I also agree that it’s mostly on MIRI to try to explain its views, not on others to do painstaking exegesis. If I don’t have a ready-on-hand link that clearly articulates the thing I’m trying to say, then it’s not surprising if others don’t have it in their model.
And based on these comments, I update that there’s probably more disagreement-about-MIRI than I was thinking, and less (though still a decent amount of) hyperbole/etc. If so, sorry about jumping to conclusions, Adam!
Not sure if this helps, and haven’t read the thread carefully, but my sense is your framing might be eliding distinctions that are actually there, in a way that makes it harder to get to the bottom of your disagreement with Adam. Some predictions I’d have are that:
* For almost any experimental result, a typical MIRI person (and you, and Eliezer) would think it was less informative about AI alignment than I would.
* For almost all experimental results, you would think they were so much less informative as to not be worthwhile.
* There’s a small subset of experimental results that we would think are comparably informative, and also some that you would find much more informative than I would.
(I’d be willing to take bets on these or pick candidate experiments to clarify this.)
In addition, a consequence of these beliefs is that you think we should be spending way more time sitting around thinking about stuff, and way less time doing experiments, than I do.
I would agree with you that “MIRI hates all experimental work” / etc. is not a faithful representation of this state of affairs, but I think there is nevertheless an important disagreement MIRI has with typical ML people, and that the disagreement is primarily about what we can learn from experiments.
> I would agree with you that “MIRI hates all experimental work” / etc. is not a faithful representation of this state of affairs, but I think there is nevertheless an important disagreement MIRI has with typical ML people, and that the disagreement is primarily about what we can learn from experiments.
Ooh, that’s really interesting. Thinking about it, I think my sense of what’s going on is (and I’d be interested to hear how this differs from your sense):
Compared to the average alignment researcher, MIRI tends to put more weight on reasoning like ‘sufficiently capable and general AI is likely to have property X as a strong default, because approximately-X-ish properties don’t seem particularly difficult to implement (e.g., they aren’t computationally intractable), and we can see from first principles that agents will be systematically less able to get what they want when they lack property X’. My sense is that MIRI puts more weight on arguments like this for reasons like:
We’re more impressed with the track record of inside-view reasoning in science.
I suspect this is partly because the average alignment researcher is impressed with how unusually-poorly inside-view reasoning has done in AI—many have tried to gain a deep understanding of intelligence, and many have failed—whereas (for various reasons) MIRI is less impressed with this, and defaults more to the base rate for other fields, where inside-view reasoning has more extraordinary feats under its belt.
We’re more wary of “modest epistemology”, which we think often acts like a self-fulfilling prophecy. (You don’t practice trying to mechanistically model everything yourself, you despair of overcoming biases, you avoid thinking thoughts that would imply you’re a leader or pioneer because that feels arrogant, so you don’t gain as much skill or feedback in those areas.)
Compared to the average alignment researcher, MIRI tends to put less weight on reasoning like ‘X was true about AI in 1990, in 2000, in 2010, and in 2020; therefore X is likely to be true about AGI when it’s developed’. This is for a variety of reasons, including:
MIRI is more generally wary of putting much weight on surface generalizations, if we don’t have an inside-view reason to expect the generalization to keep holding.
MIRI thinks AGI is better thought of as ‘a weird specific sort of AI’, rather than as ‘like existing AI but more so’.
Relatedly, MIRI thinks AGI is mostly insight-bottlenecked (we don’t know how to build it), rather than hardware-bottlenecked. Progress on understanding AGI is much harder to predict than progress on hardware, so we can’t derive as much from trends.
Applying this to experiments:
> Some predictions I’d have are that:
>
> * For almost any experimental result, a typical MIRI person (and you, and Eliezer) would think it was less informative about AI alignment than I would.
> * For almost all experimental results, you would think they were so much less informative as to not be worthwhile.
> * There’s a small subset of experimental results that we would think are comparably informative, and also some that you would find much more informative than I would.
I’d have the same prediction, though I’m less confident that ‘pessimism about experiments’ is doing much work here, vs. ‘pessimism about alignment’. To distinguish the two, I’d want to look at more conceptual work too, where I’d guess MIRI is also more pessimistic than you (though probably the gap will be smaller?).
I do expect there to be some experiment-specific effect. I don’t know your views well, but if your views are sufficiently like my mental model of ‘typical alignment researcher whose intuitions differ a lot from MIRI’s’, then my guess would be that the disagreement comes down to the above two factors.
1 (more trust in inside view): For many experiments, I’m imagining Eliezer saying ‘I predict the outcome will be X’, and then the outcome is X, and the Modal Alignment Researcher says: ‘OK, but now we’ve validated your intuition—you should be much more confident, and that update means the experiment was still useful.’
To which Hypothetical Eliezer says: ‘I was already more than confident enough. Empirical data is great—I couldn’t have gotten this confident without years of honing my models and intuitions through experience—but now that I’m there, I don’t need to feign modesty and pretend I’m uncertain about everything until I see it with my own eyes.’
2 (less trust in AGI sticking to trends): For many obvious ML experiments Eliezer can’t predict the outcome of, I expect Eliezer to say ‘This experiment isn’t relevant, because factors X, Y, and Z give us strong reason to think that the thing we learn won’t generalize to AGI.’
Which ties back in to 1 as well, because if you don’t think we can build very reliable models in AI without constant empirical feedback, you’ll rarely be confident of abstract reasons X/Y/Z to expect a difference between current ML and AGI, since you can’t go walk up to an AGI today and observe what it’s like.
(You also won’t be confident that X/Y/Z don’t hold—all the possibilities will seem plausible until AGI is actually here, because you generally don’t trust yourself to reason your way to conclusions with much confidence.)
Thanks. For time/brevity, I’ll just say which things I agree / disagree with:
> sufficiently capable and general AI is likely to have property X as a strong default [...]
I generally agree with this, although for certain important values of X (such as “fooling humans for instrumental reasons”) I’m probably more optimistic than you that there will be a robust effort to get not-X, including by many traditional ML people. I’m also probably more optimistic (but not certain) that those efforts will succeed.
[inside view, modest epistemology]: I don’t have a strong take on either of these. My main take on inside views is that they are great for generating interesting and valuable hypotheses, but usually wrong on the particulars.
> less weight on reasoning like ‘X was true about AI in 1990, in 2000, in 2010, and in 2020; therefore X is likely to be true about AGI when it’s developed
> MIRI thinks AGI is better thought of as ‘a weird specific sort of AI’, rather than as ‘like existing AI but more so’.
Probably disagree but hard to tell. I think there will both be a lot of similarities and a lot of differences.
> AGI is mostly insight-bottlenecked (we don’t know how to build it), rather than hardware-bottlenecked
Seems pretty wrong to me. We probably need both insight and hardware, but the insights themselves are hardware-bottlenecked: once you can easily try lots of stuff and see what happens, insights are much easier, see Crick on x-ray crystallography for historical support (ctrl+f for Crick).
> I’d want to look at more conceptual work too, where I’d guess MIRI is also more pessimistic than you
I’m more pessimistic than MIRI about HRAD, though that has selection effects. I’ve found conceptual work to be pretty helpful for pointing to where problems might exist, but usually relatively confused about how to address them or how specifically they’re likely to manifest. (Which is to say, overall highly valuable, but consistent with my take above on inside views.)
[experiments are either predictable or uninformative]: Seems wrong to me. As a concrete example: Do larger models have better or worse OOD generalization? I’m not sure if you’d pick “predictable” or “uninformative”, but my take is:
* The outcome wasn’t predictable: within ML there are many people who would have taken each side. (I personally was on the wrong side, i.e. predicting “worse”.)
* It’s informative, for two reasons: (1) It shows that NNs “automatically” generalize more than I might have thought, and (2) Asymptotically, we expect the curve to eventually reverse, so when does that happen and how can we study it?
> Most ML experiments either aren’t about interpretability and ‘cracking open the hood’, or they’re not approaching the problem in a way that MIRI’s excited by.
Would agree with “most”, but I think you probably meant something like “almost all”, which seems wrong. There’s lots of people working on interpretability, and some of the work seems quite good to me (aside from Chris, I think Noah Goodman, Julius Adebayo, and some others are doing pretty good work).
I’m not (retroactively in imaginary prehindsight) excited by this problem because neither of the 2 possible answers (3 possible if you count “the same”) had any clear-to-my-model relevance to alignment, or even AGI. AGI will have better OOD generalization on capabilities than current tech, basically by the definition of AGI; and then we’ve got less-clear-to-OpenPhil forces which cause the alignment to generalize more poorly than the capabilities did, which is the Big Problem. Bigger models generalizing better or worse doesn’t say anything obvious to any piece of my model of the Big Problem. Though if larger models start generalizing more poorly, then it takes longer to stupidly-brute-scale to AGI, which I suppose affects timelines some, but that just takes timelines from unpredictable to unpredictable sooo.
If we qualify an experiment as interesting when it can tell anyone about anything, then there’s an infinite variety of experiments “interesting” in this sense and I could generate an unlimited number of them. But I do restrict my interest to experiments which can not only tell me something I don’t know, but tell me something relevant that I don’t know. There is also something to be said for opening your eyes and staring around, but even then, there’s an infinity of blank faraway canvases to stare at, and the trick is to go wandering with your eyes wide open someplace you might see something really interesting. Others will be puzzled and interested by different things and I don’t wish them ill on their spiritual journeys, but I don’t expect the vast majority of them to return bearing enlightenment that I’m at all interested in, though now and then Miles Brundage tweets something (from far outside of EA) that does teach me some little thing about cognition.
I’m interested at all in Redwood Research’s latest project because it seems to offer a prospect of wandering around with our eyes open asking questions like “Well, what if we try to apply this nonviolence predicate OOD, can we figure out what really went into the ‘nonviolence’ predicate instead of just nonviolence?” or if it works maybe we can try training on corrigibility and see if we can start to manifest the tiniest bit of the predictable breakdowns, which might manifest in some different way.
Do larger models generalize better or more poorly OOD? It’s a relatively basic question as such things go, and no doubt of interest to many, and may even update our timelines from ‘unpredictable’ to ‘unpredictable’, but… I’m trying to figure out how to say this, and I think I should probably accept that there’s no way to say it that will stop people from trying to sell other bits of research as Super Relevant To Alignment… it’s okay to have an understanding of reality which makes narrower guesses than that about which projects will turn out to be very relevant.
> I’m interested at all in Redwood Research’s latest project because it seems to offer a prospect of wandering around with our eyes open asking questions like “Well, what if we try to apply this nonviolence predicate OOD, can we figure out what really went into the ‘nonviolence’ predicate instead of just nonviolence?” or if it works maybe we can try training on corrigibility and see if we can start to manifest the tiniest bit of the predictable breakdowns, which might manifest in some different way.
Trying to rephrase it in my own words (which will necessarily lose some details), are you interested in Redwood’s research because it might plausibly generate alignment issues and problems that are analogous to the real problem within the safer regime and technology we have now? Which might tell us for example “what aspect of these predictable problems crop up first, and why?”
> are you interested in Redwood’s research because it might plausibly generate alignment issues and problems that are analogous to the real problem within the safer regime and technology we have now?
It potentially sheds light on small subpieces of things that are particular aspects that contribute to the Real Problem, like “What actually went into the nonviolence predicate instead of just nonviolence?” Much of the Real Meta-Problem is that you do not get things analogous to the full Real Problem until you are just about ready to die.
I suspect a third important reason is that MIRI thinks alignment is mostly about achieving a certain kind of interpretability/understandability/etc. in the first AGI systems. Most ML experiments either aren’t about interpretability and ‘cracking open the hood’, or they’re not approaching the problem in a way that MIRI’s excited by.
E.g., if you think alignment research is mostly about testing outer reward function to see what first-order behavior they produce in non-AGI systems, rather than about ‘looking in the learned model’s brain’ to spot mesa-optimization and analyze what that optimization is ultimately ‘trying to do’ (or whatever), then you probably won’t produce stuff that MIRI’s excited about regardless of how experimental vs. theoretical your work is.
(In which case, maybe this is not actually a crux for the usefulness of most alignment experiments, and is instead a crux for the usefulness of most alignment research in general.)
(I suspect there are a bunch of other disagreements going into this too, including basic divergences on questions like ‘What’s even the point of aligning AGI? What should humanity do with aligned AGI once it has it?’.)
One tiny note: I was among the people on AAMLS; I did leave MIRI the next year; and my reasons for so doing are not in any way an indictment of MIRI. (I was having some me-problems.)
I still endorse MIRI as, in some sense, being the adults in the AI Safety room, which has… disconcerting effects on my own level of optimism.
Not planning to answer more on this thread, but given how my last messages seem to have confused you, here is my last attempt at sharing my mental model (so you can flag in an answer, for readers of this thread, where you think I’m wrong).
Also, I just checked on the publication list, and I’ve read or skimmed most things MIRI published since 2014 (including most newsletters and blog posts on MIRI website).
My model of MIRI is that initially, there was a bunch of people including EY who were working mostly on decision theory stuff, tiling, model theory, the sort of stuff I was pointing at. That predates Nate’s arrival, but in my model it becomes far more legible after that (so circa 2014/2015). In my model, I call that “old school MIRI”, and that was a big chunk of what I was pointing out in my original comment.
Then there are a bunch of things that seem to have happened:
Newer people (Abram and Scott come to mind, but mostly because they’re the ones who post on the AF and whom I’ve talked to) join this old-school MIRI approach and reshape it into Embedded Agency. Now this new agenda is a bit different from the old-school MIRI work, but I feel like it’s still not that far from decision theory and logic (with maybe a stronger emphasis on the Bayesian part, for stuff like logical induction). That might be a part where we’re disagreeing.
A direction related to embedded agency and the decision theory and logic stuff, but focused on implementations through strongly typed programming languages like Haskell and type theory. That’s technically practical, but in my mental model this goes in the same category as “decision theory and logic stuff”, especially because that sort of programming is very close to logic and natural deduction.
MIRI starts its ML-focused agenda, which you already mentioned. The impression I still have is that this didn’t lead to much published work that was actually experimental, instead focusing on recasting questions of alignment through ML theory. But I’ve updated towards thinking MIRI has invested effort into looking at stuff from a more prosaic angle, based on looking more into what has been published there, because some of these ML papers had flown under my radar. (There’s also the difficulty that when I read a paper by someone who has a position elsewhere now (say Ryan Carey or Stuart Armstrong), I think of their current affiliation rather than of MIRI, even though the work was supported by MIRI (and apparently Stuart is still supported by MIRI).) This is the part of the model where I expect that we might have very different models, because of your knowledge of what was being done internally and never released.
Some new people hired by MIRI fall into what I call the “Bell Labs MIRI” model, where MIRI just hires/funds people who have different approaches from them, but who they think are really bright (Evan and Vanessa come to mind, although I don’t know if that’s the thought process that went into hiring them).
Based on that model, and on feedback and impressions I’ve gathered from people about some MIRI researchers being very doubtful of experimental work, I arrived at my “all experimental work is useless” claim. I tried to include Redwood and Chris Olah’s work in there with a caveat (which makes for a weird model, but one that makes sense if you have a strong prior for “experimental work is useless for MIRI”).
Our discussion made me think that there’s probably far better generators for this general criticism of experimental work, and that they would actually make more sense than “experimental work is useless except this and that”.
> From testimonials by a bunch of more ML people and how any discussion of alignment needs to clarify that you don’t share MIRI’s contempt for experimental work and aren’t doing only decision theory and logic
If you were in the situation described by The Rocket Alignment Problem, you could think “working with rockets right now isn’t useful, we need to focus on our conceptual confusions about more basic things” without feeling inherently contemptuous of experimentalism—it’s a tool in the toolbox (which may or may not be appropriate to the task at hand), not a low- or high-status activity on a status hierarchy.
Separately, I think MIRI has always been pretty eager to run experiments in software when they saw an opportunity to test important questions that way. It’s also been 4.5 years now since we announced that we were shifting a lot of resources away from Agent Foundations and into new stuff, and 3 years since we wrote a very long (though still oblique) post about that research, talking about its heavy focus on running software experiments. Though we also made sure to say:
> In a sense, you can think of our new research as tackling the same sort of problem that we’ve always been attacking, but from new angles. In other words, if you aren’t excited about logical inductors or functional decision theory, you probably wouldn’t be excited by our new work either.
I don’t think you can say MIRI has “contempt for experimental work” after four years of us mainly focusing on experimental work. There are other disagreements here, but this ties in to a long-standing objection I have to false dichotomies like:
‘we can either do prosaic alignment, or run no experiments’
‘we can either do prosaic alignment, or ignore deep learning’
‘we can either think it’s useful to improve our theoretical understanding of formal agents in toy settings, or think it’s useful to run experiments’
‘we can either think the formal agents work is useful, or think it’s useful to work with state-of-the-art ML systems’
I don’t think Eliezer’s criticism of the field is about experimentalism. I do think it’s heavily about things like ‘the field focuses too much on putting external pressures on black boxes, rather than trying to open the black box’, because (a) he doesn’t think those external-pressures approaches are viable (absent a strong understanding of what’s going on inside the box), and (b) he sees the ‘open the black box’ type work as the critical blocker. (Hence his relative enthusiasm for Chris Olah’s work, which, you’ll notice, is about deep learning and not about decision theory.)
> … I find that most people working on alignment are trying far harder to justify why they expect their work to matter than EY and the old-school MIRI team ever did.
You’ve had a few comments along these lines in this thread, and I think this is where you’re most severely failing to see the situation from Yudkowsky’s point of view.
From Yudkowsky’s view, explaining and justifying MIRI’s work (and the processes he uses to reach such judgements more generally) was the main point of the sequences. He has written more on the topic than anyone else in the world, by a wide margin. He basically spent several years full-time just trying to get everyone up to speed, because the inductive gap was very very wide.
When I put on my Yudkowsky hat and look at both the OP and your comments through that lens… I imagine if I were Yudkowsky I’d feel pretty exasperated at this point. Like, he’s written a massive volume on the topic, and now ten years later a large chunk of people haven’t even bothered to read it. (In particular, I know (because it’s come up in conversation) that at least a few of the people who talk about prosaic alignment a lot haven’t read the sequences, and I suspect that a disproportionate number haven’t. I don’t mean to point fingers or cast blame here, the sequences are a lot of material and most of it is not legibly relevant before reading it all, but if you haven’t read the sequences and you’re wondering why MIRI doesn’t have a write-up on why they’re not excited about prosaic alignment… well, that’s kinda the write-up. Also I feel like I need a disclaimer here that many people excited about prosaic alignment have read the sequences, I definitely don’t mean to imply that this is everyone in the category.)
(To be clear, I don’t think the sequences explain all of the pieces behind Yudkowsky’s views of prosaic alignment, in depth. They were written for a different use-case. But I do think they explain a lot.)
Related: IMO the best roughly-up-to-date piece explaining the Yudkowsky/MIRI viewpoint is The Rocket Alignment Problem.
> You’ve had a few comments along these lines in this thread, and I think this is where you’re most severely failing to see the situation from Yudkowsky’s point of view.
>
> From Yudkowsky’s view, explaining and justifying MIRI’s work (and the processes he uses to reach such judgements more generally) was the main point of the sequences. He has written more on the topic than anyone else in the world, by a wide margin. He basically spent several years full-time just trying to get everyone up to speed, because the inductive gap was very very wide.
My memory of the sequences is that they’re far more about defending and explaining the alignment problem than about criticizing prosaic AGI (maybe because the term couldn’t have been used years before Paul coined it?). Could you give me the best pointers to criticism of prosaic alignment in the sequences? (I’ve read the sequences, but I don’t remember every single post, and my impression from memory is what I’ve written above.)
I also feel that there might be a discrepancy between who I think of when I think of prosaic alignment researchers and what the category means in general/to most people here. My category mostly includes AF posters, people from a bunch of places like EleutherAI/OpenAI/DeepMind/Anthropic/Redwood, and people from CHAI and FHI. I expect most of these people to have actually read the sequences and tried to understand MIRI’s perspective. Maybe someone could point out a list of other places where prosaic alignment research is being done that I’m missing, especially places where people probably haven’t read the sequences? Or maybe I’m overestimating how many of the people in the places I mentioned have read the sequences?
I don’t mean to say that there’s critique of prosaic alignment specifically in the sequences. Rather, a lot of the generators of the Yudkowsky-esque worldview are in there. (That is how the sequences work: it’s not about arguing specific ideas around alignment, it’s about explaining enough of the background frames and generators that the argument becomes unnecessary. “Raise the sanity waterline” and all that.)
For instance, just the other day I ran across this:
> Of this I learn the lesson: You cannot manipulate confusion. You cannot make clever plans to work around the holes in your understanding. You can’t even make “best guesses” about things which fundamentally confuse you, and relate them to other confusing things. Well, you can, but you won’t get it right, until your confusion dissolves. Confusion exists in the mind, not in the reality, and trying to treat it like something you can pick up and move around, will only result in unintentional comedy.
>
> Similarly, you cannot come up with clever reasons why the gaps in your model don’t matter. You cannot draw a border around the mystery, put on neat handles that let you use the Mysterious Thing without really understanding it—like my attempt to make the possibility that life is meaningless cancel out of an expected utility formula. You can’t pick up the gap and manipulate it.
>
> If the blank spot on your map conceals a land mine, then putting your weight down on that spot will be fatal, no matter how good your excuse for not knowing. Any black box could contain a trap, and there’s no way to know except opening up the black box and looking inside. If you come up with some righteous justification for why you need to rush on ahead with the best understanding you have—the trap goes off.
(The earlier part of the post had a couple embarrassing stories of mistakes Yudkowsky made earlier, which is where the lesson came from.) Reading that, I was like, “man that sure does sound like the Yudkowsky-esque viewpoint on prosaic alignment”.
> Or maybe I’m overestimating how many of the people in the places I mentioned have read the sequences?
I think you are overestimating. At the orgs you list, I’d guess at least 25% and probably more than half have not read the sequences. (Low confidence/wide error bars, though.)
Thank you for the links Adam. To clarify, the kind of argument I’m really looking for is something like the following three (hypothetical) examples.
Mesa-optimization is the primary threat model of unaligned AGI systems. Over the next few decades there will be a lot of companies building ML systems that create mesa-optimizers. I think it is within 5 years of current progress that we will understand how ML systems create mesa-optimizers and how to stop it. Therefore I think the current field is adequate for the problem (80%).
When I look at the research we’re outputting, it seems to me that we are producing research with a speed and flexibility greater than any comparably sized academic department globally, or the ML industry, and so I am much more hopeful that we’re able to solve our difficult problem before the industry builds an unaligned AGI. I give it a 25% probability, which I suspect is much higher than Eliezer’s.
I basically agree the alignment problem is hard and unlikely to be solved, but I don’t think we have any alternative than the current sorts of work being done, which is a combo of (a) agent foundations work (b) designing theoretical training algorithms (like Paul is) or (c) directly aligning narrowly super intelligent models. I am pretty open to Eliezer’s claim that we will fail but I see no alternative plan to pour resources into.
Whatever you actually think about the field and how it will save the world, say it!
It seems to me that almost all of the arguments you’ve made work whether the field is a failure or not. The debate here has to pass through whether the field is on track or not, and we must not sidestep that conversation.
I want to leave this paragraph as social acknowledgment that you mentioned upthread that you’re tired and taking a break, and I want to give you a bunch of social space to not return to this thread for however long you need to take! Slow comments are often the best.
I’m glad that I posted my inflammatory comment, if only because exchanging with you and Rob made me actually consider the question of “what is our story to success”, instead of just “are we making progress/creating valuable knowledge”. And the way you two have been casting it is way less aversive to me than the way EY tends to frame it. This is definitely something I want to think more about. :)
> I want to leave this paragraph as social acknowledgment that you mentioned upthread that you’re tired and taking a break, and I want to give you a bunch of social space to not return to this thread for however long you need to take! Slow comments are often the best.
I have sympathy for the “this feels somewhat contemptuous” reading, but I want to push back a bit on the “EY contemptuously calling nearly everyone fakers” angle, because I think “[thinly] veiled contempt” is an uncharitable reading. He could be simply exasperated about the state of affairs, or wishing people would change their research directions but respect them as altruists for Trying At All, or who knows what? I’d rather not overwrite his intentions with our reactions (although it is mostly the job of the writer to ensure their writing communicates the right information [although part of the point of the website discussion was to speak frankly and bluntly]).
(Later added disclaimer: it’s a good idea to add “I feel like...” before the judgment in this comment, so that you keep in mind that I’m talking about my impressions and frustrations, rarely stating obvious facts (despite the language making it look so))
Thanks for trying to understand my point and asking me for more details. I appreciate it.
Yet I feel weird when trying to answer, because my gut reaction to your comment is that you’re asking the wrong question? Also, the compression of my view to “EY’s stances seem to you to be mostly distracting people from the real work” sounds more lossy than I’m comfortable with. So let me try to clarify and focus on these feelings and impressions, then I’ll answer more about which success stories or directions excite me.
My current problem with EY’s stances is twofold:
First, in posts like this one, he literally writes that everything done under the label of alignment is faking it and not even attacking the problem, except like 3 people who, even if they’re trying, have it all wrong. I think this is completely wrong, and that’s even more annoying because I find that most people working on alignment are trying far harder to justify why they expect their work to matter than EY and the old-school MIRI team ever did.
This is a problem because it doesn’t help anyone working in the field to maybe fix the problems that EY sees with their approaches, which sounds like a massive missed opportunity.
This is also a problem because EY’s opinions are still heavily promoted in the community (especially here on the AF and LW), such that newcomers who look to what the founder of the field has to say come away with the impression that no one is doing valuable work.
Far more speculatively (because I don’t know EY personally), I expect that kind of judgment to come not so much from a place of all-encompassing genius but instead from generalization after reading some posts/papers. And following this thread I’ve received messages from people who were just as annoyed as I was, and who felt their results had been dismissed without even a comment, or classified as trivial when everyone else, including the authors, was quite surprised by them. I’m ready to give EY a bit of “he just sees further than most people”, but not enough that he can discard the whole field from reading a couple of AF posts.
Second, historically, a lot of MIRI's work has followed a specific epistemic strategy of trying to understand the optimal ways of deciding and thinking, both to predict how an AGI would actually behave and to try to align it. I'm not that convinced by this approach, but even giving it the benefit of the doubt, it has in no way led to accomplishments big enough to justify EY's (and MIRI's?) thinly veiled contempt for anyone not doing that. This had and still has many bad impacts on the field and new entrants.
A specific subgroup of people tends to be nerd-sniped by this older MIRI work, because it's the only part of the field that is more formal, but IMO at the loss of most of what matters about alignment and most of the grounding.
People who don't have the technical skill to work on MIRI's older agenda feel like they have to skill up drastically in maths to be able to do anything relevant in alignment. I literally mentored three people like that, who could actually do a lot of good thinking and cared about alignment, and I had to drill into their heads that they didn't need super advanced maths skills, unless they wanted to do very, very specific things.
I find that particularly sad because IMO the biggest positive contribution to the field by EY and early MIRI comes from their less formal and more philosophical work, which is exactly the kind of work that is stifled by the consequences of this stance.
I also feel people here underestimate how repellent this whole attitude has been for years to most people outside the MIRI bubble. From testimonials by a bunch of ML people, and from how any discussion of alignment needs to clarify that you don't share MIRI's contempt with experimental work and not doing only decision theory and logic, I expect that this has been one of the big factors in alignment not being taken seriously and people not wanting to work on it.
Also important to note that I don't know if EY and MIRI still think this kind of technical research is highly valuable and the real research and what should be done, but they have been influential enough that I think a big part of the damage is done, and I read some parts of this post as "If only we could do the real logic thing, but we can't so we're doomed". There's also a question of the separation between the image that MIRI and EY project and what they actually think.
Going back to your question, it has a weird double-standard feel. Like, every AF post on more prosaic alignment methods comes with its success story and a reason for caring about the research. If EY and MIRI want to argue that we're all doomed, they have the burden of proof to explain why everything that's been done is terrible and will never lead to alignment. Once again, proving that we won't be able to solve a problem is incredibly hard and improbable. Funny how everyone here gets that for the "AGI is impossible" question, but apparently that doesn't apply to "Actually working with AIs and Thinking about real AIs will never let you solve alignment in time."
Still, it’s not too difficult to list a bunch of promising stuff, so here’s a non-exhaustive list:
John Wentworth's Natural Abstraction Hypothesis, which is about checking his formalism-backed intuition that NNs actually learn similar abstractions to the ones humans do. The success story is pretty obvious, in that if John is right, alignment should be far easier.
People from EleutherAI working on understanding LMs and GPT-like models as simulators of processes (called simulacra), as well as the safety benefits (corrigibility) and new strategies (leveraging the output distribution in smart ways) that this model allows.
Evan Hubinger’s work on finding predicates that we could check during training to avoid deception and behaviors we’re worried about. He has a full research agenda but it’s not public yet. Maybe our post on myopic decision theory could be relevant.
Stuart Armstrong's work on model splintering, especially his AI Safety Subprojects, which are experimental, not obvious in what they will find, and directly relevant to implementing and using model splintering to solve alignment.
Paul Christiano’s recent work on making question-answerers give useful information instead of what they expect humans to answer, which has a clear success story for these kinds of powerful models and their use in building stronger AIs and supervising training for example.
It’s also important to remember how alignment and the related problems and ideas are still not that well explained, distilled and analyzed for teaching and criticism. So I’m excited too about work that isn’t directly solving alignment but just making things clearer and more explicit, like Evan’s recent post or my epistemic strategies analysis.
Thanks for naming specific work you think is really good! I think it’s pretty important here to focus on the object-level. Even if you think the goodness of these particular research directions isn’t cruxy (because there’s a huge list of other things you find promising, and your view is mainly about the list as a whole rather than about any particular items on it), I still think it’s super important for us to focus on object-level examples, since this will probably help draw out what the generators for the disagreement are.
Eliezer liked this post enough that he asked me to signal-boost it in the MIRI Newsletter back in April.
And Paul Christiano and Stuart Armstrong are two of the people Eliezer named as doing very-unusually good work. We continue to pay Stuart to support his research, though he’s mainly supported by FHI.
And Evan works at MIRI, which provides some Bayesian evidence about how much we tend to like his stuff. :)
So maybe there’s not much disagreement here about what’s relatively good? (Or maybe you’re deliberately picking examples you think should be ‘easy sells’ to Steel Eliezer.)
The main disagreement, of course, is about how absolutely promising this kind of stuff is, not how relatively promising it is. This could be some of the best stuff out there, but my understanding of the Adam/Eliezer disagreement is that it’s about ‘how much does this move the dial on actually saving the world?’ / ‘how much would we move the dial if we just kept doing more stuff like this?’.
Actually, this feels to me like a thing that your comments have bounced off of a bit. From my perspective, Eliezer’s statement was mostly saying ‘the field as a whole is failing at our mission of preventing human extinction; I can name a few tiny tidbits of relatively cool things (not just MIRI stuff, but Olah and Christiano), but the important thing is that in absolute terms the whole thing is not getting us to the world where we actually align the first AGI systems’.
My Eliezer-model thinks nothing (including MIRI stuff) has moved the dial much, relative to the size of the challenge. But your comments have mostly been about a sort of status competition between decision theory stuff and ML stuff, between prosaic stuff and ‘gain new insights into intelligence’ stuff, between MIRI stuff and non-MIRI stuff, etc. This feels to me like it’s ignoring the big central point (‘our work so far is wildly insufficient’) in order to haggle over the exact ordering of the wildly-insufficient things.
You’re zeroed in on the “vast desert” part, but the central point wasn’t about the desert-oasis contrast, it was that the whole thing is (on Eliezer’s model) inadequate to the task at hand. Likewise, you’re talking a lot about the “fake” part (and misstating Eliezer’s view as “everyone else [is] a faker”), when the actual claim was about “work that seems to me to be mostly fake or pointless or predictable” (emphasis added).
Maybe to you these feel similar, because they’re all just different put-downs. But… if those were true descriptions of things about the field, they would imply very different things.
I would like to put forward that Eliezer thinks, in good faith, that this is the best hypothesis that fits the data. I absolutely think reasonable people can disagree with Eliezer on this, and I don’t think we need to posit any bad faith or personality failings to explain why people would disagree.
Also, I feel like I want to emphasize that, like… it’s OK to believe that the field you’re working in is in a bad state? The social pressure against saying that kind of thing (or even thinking it to yourself) is part of why a lot of scientific fields are unhealthy, IMO. I’m in favor of you not taking for granted that Eliezer’s right, and pushing back insofar as your models disagree with his. But I want to advocate against:
Saying false things about what the other person is saying. A lot of what you’ve said about Eliezer and MIRI is just obviously false (e.g., we have contempt for “experimental work” and think you can’t make progress by “Actually working with AIs and Thinking about real AIs”).
Shrinking the window of ‘socially acceptable things to say about the field as a whole’ (as opposed to unsolicited harsh put-downs of a particular researcher’s work, where I see more value in being cautious).
I want to advocate ‘smack-talking the field is fine, if that’s your honest view; and pushing back is fine, if you disagree with the view’. I want to see more pushing back on the object level (insofar as people disagree), and less ‘how dare you say that, do you think you’re the king of alignment or something’ or ‘saying that will have bad social consequences’.
I think you’re picking up on a real thing of ‘a lot of people are too deferential to various community leaders, when they should be doing more model-building, asking questions, pushing back where they disagree, etc.’ But I think the solution is to shift more of the conversation to object-level argument (that is, modeling the desired behavior), and make that argument as high-quality as possible.
Thanks for your great comments!
One thing I want to make clear is that I'm quite aware that my comments have not been as high-quality as they should have been. As I wrote in the disclaimer, I was writing from a place of frustration and annoyance, which also implies a focus on more status-y things. That sounded necessary to me to air out this frustration, and I think it was a good idea given the upvotes on my original post and the couple of people who messaged me to tell me that they were also annoyed.
That being said, much of what I was railing against is a general perception of the situation, from reading a lot of stuff but not necessarily stopping to study all the evidence before writing a fully thought-through opinion. I think this is where the "saying obviously false things" comes from (which I think are pretty easy to believe from just reading this post and a bunch of MIRI write-ups), and why your comments are really important to clarify the discrepancy between this general mental picture I was drawing from and the actual reality. Also, recentering the discussion on the object level instead of on status arguments sounds like a good move.
You make a lot of good points and I definitely want to continue the conversation and have more detailed discussion, but I also feel that for the moment I need to take some steps back, read your comments and some of the pointers in other comments, and think a bit more about the question. I don’t think there’s much more to gain from me answering quickly, mostly in reaction.
(I also had the brilliant idea of starting this thread just when I was on the edge of burning out from working too much (during my holidays), so I’m just going to take some time off from work. But I definitely want to continue this conversation further when I come back, although probably not in this thread ^^)
Enjoy your rest! :)
If you’d just aired out your frustration, framing claims about others in NVC-like ‘I feel like...’ terms (insofar as you suspect you wouldn’t reflectively endorse them), and then a bunch of people messaged you in private to say “thank you! you captured my feelings really well”, then that would seem clearly great to me.
I'm a bit worried that what instead happened is that you made a bunch of clearly-false claims about other people and gave a bunch of invalid arguments, mixed in with the feelings-stuff; and you used the content warning at the top of the message to avoid having to distinguish which parts of your long, detailed comment are endorsed or not (rather than also flagging this within the comment); and then you also ran with this in a bunch of follow-up comments that were similarly not-endorsed but didn't even have the top-of-comment disclaimer. With the result that some people who aren't independently familiar with all the background facts could come away with a lot of wrong beliefs about the people you're criticizing.
‘Other people liked my comment, so it was clearly a good thing’ doesn’t distinguish between the worlds where they like it because they share the feelings, vs. agreeing with the factual claims and arguments (and if the latter, whether they’re noticing and filtering out all the seriously false or not-locally-valid parts). If the former, I think it was good. If the latter, I think it was bad.
(By default I’d assume it’s some mix.)
That sounds a bit unfair, in the sense that it makes it look like I just invented stuff I didn't believe and ran with it. When what actually happened was that I wrote about my frustrations, but made the mistake of stating them as obvious facts instead of impressions.
Of course, I imagine you feel that my portrayal of EY and MIRI was also unfair, sorry about that.
(I added a note to the three most ranty comments on this thread saying that people should mentally add “I feel like...” to judgments in them.)
Thanks for adding the note! :)
I’m confused. When I say ‘that’s just my impression’, I mean something like ‘that’s an inside-view belief that I endorse but haven’t carefully vetted’. (See, e.g., Impression Track Records, referring to Naming Beliefs.)
Example: you said that MIRI has “contempt with experimental work and not doing only decision theory and logic”.
My prior guess would have been that you don’t actually, for-real believe that—that it’s not your ‘impression’ in the above sense, more like ‘unendorsed venting/hyperbole that has a more complicated relation to something you really believe’.
If you do (or did) think that’s actually true, then our models of MIRI are much more different than I thought! Alternatively, if you agree this is not true, then that’s all I meant in the previous comment. (Sorry if I was unclear about that.)
I would say that with slight caveats (make "decision theory and logic" a bit larger to include some more mathy stuff, and make "all experimental work" a bit smaller to not include Redwood's work), this was indeed my model.
What made me update from our discussion is the realization that I had interpreted the dismissal of basically all alignment research as "this has no value whatsoever and people doing it are just pretending to care about alignment", where it should have been interpreted as something like "this is potentially interesting/new/exciting, but it doesn't look like it brings us closer to solving alignment in a significant way, hence we're still failing".
‘Experimental work is categorically bad, but Redwood’s work doesn’t count’ does not sound like a “slight caveat” to me! What does this generalization mean at all if Redwood’s stuff doesn’t count?
(Neither, for that matter, does the difference between ‘decision theory and logic’ and ‘all mathy stuff MIRI has ever focused on’ seem like a ‘slight caveat’ to me—but in that case maybe it’s because I have a lot more non-logic, non-decision-theory examples in my mind that you might not be familiar with, since it sounds like you haven’t read much MIRI stuff?).
(Responding to entire comment thread) Rob, I don’t think you’re modeling what MIRI looks like from the outside very well.
There’s a lot of public stuff from MIRI on a cluster that has as central elements decision theory and logic (logical induction, Vingean reflection, FDT, reflective oracles, Cartesian Frames, Finite Factored Sets...)
There was once an agenda (AAMLS) that involved thinking about machine learning systems, but it was deprioritized, and the people working on it left MIRI.
There was a non-public agenda that involved Haskell programmers. That’s about all I know about it. For all I know they were doing something similar to the modal logic work I’ve seen in the past.
Eliezer frequently talks about how everyone doing ML work is pursuing dead ends, with the possible exception of Chris Olah. Chris's work is not central to the cluster I would call "experimentalist".
There has been one positive comment on the KL-divergence result in summarizing from human feedback. That wasn’t the main point of that paper and was an extremely predictable result.
There has also been one positive comment on Redwood Research, which was founded by people who have close ties to MIRI. The current steps they are taking are not dramatically different from what other people have been talking about and/or doing.
There was a positive-ish comment on aligning narrowly superhuman models, though iirc it gave off more of an impression of “well, let’s at least die in a slightly more dignified way”.
I don’t particularly agree with Adam’s comments, but it does not surprise me that someone could come to honestly believe the claims within them.
So, the point of my comments was to draw a contrast between having a low opinion of “experimental work and not doing only decision theory and logic”, and having a low opinion of “mainstream ML alignment work, and of nearly all work outside the HRAD-ish cluster of decision theory, logic, etc.” I didn’t intend to say that the latter is obviously-wrong; my goal was just to point out how different those two claims are, and say that the difference actually matters, and that this kind of hyperbole (especially when it never gets acknowledged later as ‘oh yeah, that’s not true and wasn’t what I was thinking’) is not great for discussion.
I think it’s true that ‘MIRI is super not into most ML alignment work’, and I think it used to be true that MIRI put almost all of its research effort into HRAD-ish work, and regardless, this all seems like a completely understandable cached impression to have of current-MIRI. If I wrote stuff that makes it sound like I don’t think those views are common, reasonable, etc., then I apologize for that and disavow the thing I said.
But this is orthogonal to what I thought I was talking about, so I’m confused about what seems to me like a topic switch. Maybe the implied background view here is:
‘Adam’s elision between those two claims was a totally normal level of hyperbole/imprecision, like you might find in any LW comment. Picking on word choices like “only decision theory and logic” versus “only research that’s clustered near decision theory and logic in conceptspace”, or “contempt with experimental work” versus “assigning low EV to typical instances of empirical ML alignment work”, is an isolated demand for rigor that wouldn’t make sense as a general policy and isn’t, in any case, the LW norm.’
Is that right?
It occurs to me that part of the problem may be precisely that Adam et al. don’t think there’s a large difference between these two claims (that actually matters). For example, when I query my (rough, coarse-grained) model of [your typical prosaic alignment optimist], the model in question responds to your statement with something along these lines:
Moreover: I don't think [my model of] the prosaic alignment optimist is being stupid here. I think, to the extent that his words miss an important distinction, it is because that distinction is missing from his very thoughts and framing, not because he happened to choose his words somewhat carelessly when attempting to describe the situation. Insofar as this is true, I expect him to react to your highlighting of this distinction with (mostly) bemusement, confusion, and possibly even some slight suspicion (e.g. that you're trying to muddy the waters with irrelevant nitpicking).
To be clear: I don’t think you’re attempting to muddy the waters with irrelevant nitpicking here. I think you think the distinction in question is important because it’s pointing to something real, true, and pertinent—but I also think you’re underestimating how non-obvious this is to people who (A) don’t already deeply understand MIRI’s view, and (B) aren’t in the habit of searching for ways someone’s seemingly pointless statement might actually be right.
I don’t consider myself someone who deeply understands MIRI’s view. But I do want to think of myself as someone who, when confronted with a puzzling statement [from someone whose intellectual prowess I generally respect], searches for ways their statement might be right. So, here is my attempt at describing the real crux behind this disagreement:
(with the caveat that, as always, this is my view, not Rob’s, MIRI’s, or anybody else’s)
(and with the additional caveat that, even if my read of the situation turns out to be correct, I think in general the onus is on MIRI to make sure they are understood correctly, rather than on outsiders to try to interpret them—at least, assuming that MIRI wants to make sure they’re understood correctly, which may not always be the best use of researcher time)
I think the disagreement is mostly about MIRI’s counterfactual behavior, not about their actual behavior. I think most observers (including both Adam and Rob) would agree that MIRI leadership has been largely unenthusiastic about a large class of research that currently falls under the umbrella “experimental work”, and that the amount of work in this class MIRI has been unenthused about significantly outweighs the amount of work they have been excited about.
Where I think Adam and Rob diverge is in their respective models of the generator of this observed behavior. I think Adam (and those who agree with him) thinks that the true boundary of the category [stuff MIRI finds unpromising] roughly coincides with the boundary of the category [stuff most researchers would call “experimental work”], such that anything that comes too close to “running ML experiments and seeing what happens” will be met with an immediate dismissal from MIRI. In other words, [my model of] Adam thinks MIRI’s generator is configured such that the ratio of “experimental work” they find promising-to-unpromising would be roughly the same across many possible counterfactual worlds, even if each of those worlds is doing “experiments” investigating substantially different hypotheses.
Conversely, I think Rob thinks the true boundary of the category [stuff MIRI finds unpromising] is mostly unrelated to the boundary of the category [stuff most researchers would call “experimental work”], and that—to the extent MIRI finds most existing “experimental work” unpromising—this is mostly because the existing work is not oriented along directions MIRI finds promising. In other words, [my model of] Rob thinks MIRI’s generator is configured such that the ratio of “experimental work” they find promising-to-unpromising would vary significantly across counterfactual worlds where researchers investigate different hypotheses; in particular, [my model of] Rob thinks MIRI would find most “experimental work” highly promising in the world where the “experiments” being run are those whose results Eliezer/Nate/etc. would consider difficult to predict in advance, and therefore convey useful information regarding the shape of the alignment problem.
I think Rob’s insistence on maintaining the distinction between having a low opinion of “experimental work and not doing only decision theory and logic”, and having a low opinion of “mainstream ML alignment work, and of nearly all work outside the HRAD-ish cluster of decision theory, logic, etc.” is in fact an attempt to gesture at the underlying distinction outlined above, and I think that his stringency on this matter makes significantly more sense in light of this. (Though, once again, I note that I could be completely mistaken in everything I just wrote.)
Assuming, however, that I’m (mostly) not mistaken, I think there’s an obvious way forward in terms of resolving the disagreement: try to convey the underlying generators of MIRI’s worldview. In other words, do the thing you were going to do anyway, and save the discussions about word choice for afterwards.
^ This response is great.
I also think I naturally interpreted the terms in Adam’s comment as pointing to specific clusters of work in today’s world, rather than universal claims about all work that could ever be done. That is, when I see “experimental work and not doing only decision theory and logic”, I automatically think of “experimental work” as pointing to a specific cluster of work that exists in today’s world (which we might call mainstream ML alignment), rather than “any information you can get by running code”. Whereas it seems you interpreted it as something closer to “MIRI thinks there isn’t any information to get by running code”.
My brain insists that my interpretation is the obvious one and is confused how anyone (within the AI alignment field, who knows about the work that is being done) could interpret it as the latter. (Although the existence of non-public experimental work that isn’t mainstream ML is a good candidate for how you would start to interpret “experimental work” as the latter.) But this seems very plausibly a typical mind fallacy.
EDIT: Also, to explicitly say it, sorry for misunderstanding what you were trying to say. I did in fact read your comments as saying “no, MIRI is not categorically against mainstream ML work, and MIRI is not only working on HRAD-ish stuff like decision theory and logic, and furthermore this should be pretty obvious to outside observers”, and now I realize that is not what you were saying.
This is a good comment! I also agree that it’s mostly on MIRI to try to explain its views, not on others to do painstaking exegesis. If I don’t have a ready-on-hand link that clearly articulates the thing I’m trying to say, then it’s not surprising if others don’t have it in their model.
And based on these comments, I update that there’s probably more disagreement-about-MIRI than I was thinking, and less (though still a decent amount of) hyperbole/etc. If so, sorry about jumping to conclusions, Adam!
Not sure if this helps, and haven’t read the thread carefully, but my sense is your framing might be eliding distinctions that are actually there, in a way that makes it harder to get to the bottom of your disagreement with Adam. Some predictions I’d have are that:
* For almost any experimental result, a typical MIRI person (and you, and Eliezer) would think it was less informative about AI alignment than I would.
* For almost all experimental results you would think they were so much less informative as to not be worthwhile.
* There's a small subset of experimental results that we would think are comparably informative, and also some that you would find much more informative than I would.
(I’d be willing to take bets on these or pick candidate experiments to clarify this.)
In addition, a consequence of these beliefs is that you think we should be spending way more time sitting around thinking about stuff, and way less time doing experiments, than I do.
I would agree with you that “MIRI hates all experimental work” / etc. is not a faithful representation of this state of affairs, but I think there is nevertheless an important disagreement MIRI has with typical ML people, and that the disagreement is primarily about what we can learn from experiments.
Ooh, that’s really interesting. Thinking about it, I think my sense of what’s going on is (and I’d be interested to hear how this differs from your sense):
Compared to the average alignment researcher, MIRI tends to put more weight on reasoning like ‘sufficiently capable and general AI is likely to have property X as a strong default, because approximately-X-ish properties don’t seem particularly difficult to implement (e.g., they aren’t computationally intractable), and we can see from first principles that agents will be systematically less able to get what they want when they lack property X’. My sense is that MIRI puts more weight on arguments like this for reasons like:
We’re more impressed with the track record of inside-view reasoning in science.
I suspect this is partly because the average alignment researcher is impressed with how unusually-poorly inside-view reasoning has done in AI—many have tried to gain a deep understanding of intelligence, and many have failed—whereas (for various reasons) MIRI is less impressed with this, and defaults more to the base rate for other fields, where inside-view reasoning has more extraordinary feats under its belt.
We’re more wary of “modest epistemology”, which we think often acts like a self-fulfilling prophecy. (You don’t practice trying to mechanistically model everything yourself, you despair of overcoming biases, you avoid thinking thoughts that would imply you’re a leader or pioneer because that feels arrogant, so you don’t gain as much skill or feedback in those areas.)
Compared to the average alignment researcher, MIRI tends to put less weight on reasoning like ‘X was true about AI in 1990, in 2000, in 2010, and in 2020; therefore X is likely to be true about AGI when it’s developed’. This is for a variety of reasons, including:
MIRI is more generally wary of putting much weight on surface generalizations, if we don’t have an inside-view reason to expect the generalization to keep holding.
MIRI thinks AGI is better thought of as ‘a weird specific sort of AI’, rather than as ‘like existing AI but more so’.
Relatedly, MIRI thinks AGI is mostly insight-bottlenecked (we don’t know how to build it), rather than hardware-bottlenecked. Progress on understanding AGI is much harder to predict than progress on hardware, so we can’t derive as much from trends.
Applying this to experiments:
I’d have the same prediction, though I’m less confident that ‘pessimism about experiments’ is doing much work here, vs. ‘pessimism about alignment’. To distinguish the two, I’d want to look at more conceptual work too, where I’d guess MIRI is also more pessimistic than you (though probably the gap will be smaller?).
I do expect there to be some experiment-specific effect. I don’t know your views well, but if your views are sufficiently like my mental model of ‘typical alignment researcher whose intuitions differ a lot from MIRI’s’, then my guess would be that the disagreement comes down to the above two factors.
1 (more trust in inside view): For many experiments, I’m imagining Eliezer saying ‘I predict the outcome will be X’, and then the outcome is X, and the Modal Alignment Researcher says: ‘OK, but now we’ve validated your intuition—you should be much more confident, and that update means the experiment was still useful.’
To which Hypothetical Eliezer says: ‘I was already more than confident enough. Empirical data is great—I couldn’t have gotten this confident without years of honing my models and intuitions through experience—but now that I’m there, I don’t need to feign modesty and pretend I’m uncertain about everything until I see it with my own eyes.’
2 (less trust in AGI sticking to trends): For many obvious ML experiments Eliezer can’t predict the outcome of, I expect Eliezer to say ‘This experiment isn’t relevant, because factors X, Y, and Z give us strong reason to think that the thing we learn won’t generalize to AGI.’
Which ties back in to 1 as well, because if you don’t think we can build very reliable models in AI without constant empirical feedback, you’ll rarely be confident of abstract reasons X/Y/Z to expect a difference between current ML and AGI, since you can’t go walk up to an AGI today and observe what it’s like.
(You also won’t be confident that X/Y/Z don’t hold—all the possibilities will seem plausible until AGI is actually here, because you generally don’t trust yourself to reason your way to conclusions with much confidence.)
Thanks. For time/brevity, I’ll just say which things I agree / disagree with:
> sufficiently capable and general AI is likely to have property X as a strong default [...]
I generally agree with this, although for certain important values of X (such as “fooling humans for instrumental reasons”) I’m probably more optimistic than you that there will be a robust effort to get not-X, including by many traditional ML people. I’m also probably more optimistic (but not certain) that those efforts will succeed.
[inside view, modest epistemology]: I don’t have a strong take on either of these. My main take on inside views is that they are great for generating interesting and valuable hypotheses, but usually wrong on the particulars.
> less weight on reasoning like ‘X was true about AI in 1990, in 2000, in 2010, and in 2020; therefore X is likely to be true about AGI when it’s developed’
I agree; see my post On the Risks of Emergent Behavior in Foundation Models. In the past I think I put too much weight on this type of reasoning, and I also think most people in ML put too much weight on it.
> MIRI thinks AGI is better thought of as ‘a weird specific sort of AI’, rather than as ‘like existing AI but more so’.
Probably disagree but hard to tell. I think there will both be a lot of similarities and a lot of differences.
> AGI is mostly insight-bottlenecked (we don’t know how to build it), rather than hardware-bottlenecked
Seems pretty wrong to me. We probably need both insight and hardware, but the insights themselves are hardware-bottlenecked: once you can easily try lots of stuff and see what happens, insights are much easier, see Crick on x-ray crystallography for historical support (ctrl+f for Crick).
> I’d want to look at more conceptual work too, where I’d guess MIRI is also more pessimistic than you
I’m more pessimistic than MIRI about HRAD, though that has selection effects. I’ve found conceptual work to be pretty helpful for pointing to where problems might exist, but usually relatively confused about how to address them or how specifically they’re likely to manifest. (Which is to say, overall highly valuable, but consistent with my take above on inside views.)
[experiments are either predictable or uninformative]: Seems wrong to me. As a concrete example: Do larger models have better or worse OOD generalization? I’m not sure if you’d pick “predictable” or “uninformative”, but my take is:
* The outcome wasn’t predictable: within ML there are many people who would have taken each side. (I personally was on the wrong side, i.e. predicting “worse”.)
* It’s informative, for two reasons: (1) It shows that NNs “automatically” generalize more than I might have thought, and (2) Asymptotically, we expect the curve to eventually reverse, so when does that happen and how can we study it?
See also my take on Measuring and Forecasting Risks from AI, especially the section on far-off risks.
> Most ML experiments either aren’t about interpretability and ‘cracking open the hood’, or they’re not approaching the problem in a way that MIRI’s excited by.
Would agree with “most”, but I think you probably meant something like “almost all”, which seems wrong. There’s lots of people working on interpretability, and some of the work seems quite good to me (aside from Chris, I think Noah Goodman, Julius Adebayo, and some others are doing pretty good work).
I’m not (retroactively in imaginary prehindsight) excited by this problem because neither of the 2 possible answers (3 possible if you count “the same”) had any clear-to-my-model relevance to alignment, or even AGI. AGI will have better OOD generalization on capabilities than current tech, basically by the definition of AGI; and then we’ve got less-clear-to-OpenPhil forces which cause the alignment to generalize more poorly than the capabilities did, which is the Big Problem. Bigger models generalizing better or worse doesn’t say anything obvious to any piece of my model of the Big Problem. Though if larger models start generalizing more poorly, then it takes longer to stupidly-brute-scale to AGI, which I suppose affects timelines some, but that just takes timelines from unpredictable to unpredictable sooo.
If we qualify an experiment as interesting when it can tell anyone about anything, then there’s an infinite variety of experiments “interesting” in this sense and I could generate an unlimited number of them. But I do restrict my interest to experiments which can not only tell me something I don’t know, but tell me something relevant that I don’t know. There is also something to be said for opening your eyes and staring around, but even then, there’s an infinity of blank faraway canvases to stare at, and the trick is to go wandering with your eyes wide open someplace you might see something really interesting. Others will be puzzled and interested by different things and I don’t wish them ill on their spiritual journeys, but I don’t expect the vast majority of them to return bearing enlightenment that I’m at all interested in, though now and then Miles Brundage tweets something (from far outside of EA) that does teach me some little thing about cognition.
I’m interested at all in Redwood Research’s latest project because it seems to offer a prospect of wandering around with our eyes open asking questions like “Well, what if we try to apply this nonviolence predicate OOD, can we figure out what really went into the ‘nonviolence’ predicate instead of just nonviolence?” or if it works maybe we can try training on corrigibility and see if we can start to manifest the tiniest bit of the predictable breakdowns, which might manifest in some different way.
Do larger models generalize better or more poorly OOD? It’s a relatively basic question as such things go, and no doubt of interest to many, and may even update our timelines from ‘unpredictable’ to ‘unpredictable’, but… I’m trying to figure out how to say this, and I think I should probably accept that there’s no way to say it that will stop people from trying to sell other bits of research as Super Relevant To Alignment… it’s okay to have an understanding of reality which makes narrower guesses than that about which projects will turn out to be very relevant.
Trying to rephrase it in my own words (which will necessarily lose some details), are you interested in Redwood’s research because it might plausibly generate alignment issues and problems that are analogous to the real problem within the safer regime and technology we have now? Which might tell us for example “what aspect of these predictable problems crop up first, and why?”
It potentially sheds light on small subpieces of things that are particular aspects that contribute to the Real Problem, like “What actually went into the nonviolence predicate instead of just nonviolence?” Much of the Real Meta-Problem is that you do not get things analogous to the full Real Problem until you are just about ready to die.
I suspect a third important reason is that MIRI thinks alignment is mostly about achieving a certain kind of interpretability/understandability/etc. in the first AGI systems. Most ML experiments either aren’t about interpretability and ‘cracking open the hood’, or they’re not approaching the problem in a way that MIRI’s excited by.
E.g., if you think alignment research is mostly about testing outer reward function to see what first-order behavior they produce in non-AGI systems, rather than about ‘looking in the learned model’s brain’ to spot mesa-optimization and analyze what that optimization is ultimately ‘trying to do’ (or whatever), then you probably won’t produce stuff that MIRI’s excited about regardless of how experimental vs. theoretical your work is.
(In which case, maybe this is not actually a crux for the usefulness of most alignment experiments, and is instead a crux for the usefulness of most alignment research in general.)
(I suspect there are a bunch of other disagreements going into this too, including basic divergences on questions like ‘What’s even the point of aligning AGI? What should humanity do with aligned AGI once it has it?’.)
One tiny note: I was among the people on AAMLS; I did leave MIRI the next year; and my reasons for so doing are not in any way an indictment of MIRI. (I was having some me-problems.)
I still endorse MIRI as, in some sense, being the adults in the AI Safety room, which has… disconcerting effects on my own level of optimism.
Not planning to answer more on this thread, but given how my last messages seem to have confused you, here is my last attempt at sharing my mental model (so that, for readers of this thread, you can flag in an answer where you think I’m wrong).
Also, I just checked the publication list, and I’ve read or skimmed most things MIRI has published since 2014 (including most newsletters and blog posts on the MIRI website).
My model of MIRI is that initially, there was a bunch of people including EY who were working mostly on decision theory stuff, tiling, model theory, the sort of stuff I was pointing at. That predates Nate’s arrival, but in my model it becomes far more legible after that (so circa 2014/2015). In my model, I call that “old school MIRI”, and that was a big chunk of what I was pointing out in my original comment.
Then there are a bunch of things that seem to have happened:
Newer people (Abram and Scott come to mind, but mostly because they’re the ones who post on the AF and whom I’ve talked to) join this old-school MIRI approach and reshape it into Embedded Agency. Now this new agenda is a bit different from the old-school MIRI work, but I feel like it’s still not that far from decision theory and logic (with maybe a stronger emphasis on the Bayesian part, for stuff like logical induction). That might be a part where we’re disagreeing.
A direction related to embedded agency and the decision theory and logic stuff, but focused on implementations through strongly typed programming languages like Haskell and type theory. That’s technically practical, but in my mental model this goes in the same category as “decision theory and logic stuff”, especially because that sort of programming is very close to logic and natural deduction.
MIRI starts its ML-focused agenda, which you already mentioned. The impression I still have is that this didn’t lead to much published work that was actually experimental, instead focusing on recasting questions of alignment through ML theory. But I’ve updated towards thinking MIRI has invested effort into looking at things from a more prosaic angle, based on looking more into what has been published there, because some of these ML papers had flown under my radar (there’s also the difficulty that when I read a paper by someone who has a position elsewhere now — say Ryan Carey or Stuart Armstrong — I don’t think of MIRI but of their current affiliation, even though the work was supported by MIRI (and apparently Stuart is still supported by MIRI)). This is the part of the model where I expect that we might have very different models, because of your knowledge of what was being done internally and never released.
Some new people hired by MIRI fall into what I call the “Bell Labs MIRI” model, where MIRI just hires/funds people who have different approaches from them, but who they think are really bright (Evan and Vanessa come to mind, although I don’t know if that’s the thought process that went into hiring them).
Based on that model, and on feedback and impressions I’ve gathered from people about some MIRI researchers being very doubtful of experimental work, I arrived at my “all experimental work is useless” reading. I tried to include Redwood’s and Chris Olah’s work in there with the caveat (which makes for a weird model, but one that makes sense if you have a strong prior for “experimental work is useless for MIRI”).
Our discussion made me think that there’s probably far better generators for this general criticism of experimental work, and that they would actually make more sense than “experimental work is useless except this and that”.
If you were in the situation described by The Rocket Alignment Problem, you could think “working with rockets right now isn’t useful, we need to focus on our conceptual confusions about more basic things” without feeling inherently contemptuous of experimentalism—it’s a tool in the toolbox (which may or may not be appropriate to the task at hand), not a low- or high-status activity on a status hierarchy.
Separately, I think MIRI has always been pretty eager to run experiments in software when they saw an opportunity to test important questions that way. It’s also been 4.5 years now since we announced that we were shifting a lot of resources away from Agent Foundations and into new stuff, and 3 years since we wrote a very long (though still oblique) post about that research, talking about its heavy focus on running software experiments. Though we also made sure to say:
I don’t think you can say MIRI has “contempt with experimental work” after four years of us mainly focusing on experimental work. There are other disagreements here, but this ties in to a long-standing objection I have to false dichotomies like:
‘we can either do prosaic alignment, or run no experiments’
‘we can either do prosaic alignment, or ignore deep learning’
‘we can either think it’s useful to improve our theoretical understanding of formal agents in toy settings, or think it’s useful to run experiments’
‘we can either think the formal agents work is useful, or think it’s useful to work with state-of-the-art ML systems’
I don’t think Eliezer’s criticism of the field is about experimentalism. I do think it’s heavily about things like ‘the field focuses too much on putting external pressures on black boxes, rather than trying to open the black box’, because (a) he doesn’t think those external-pressures approaches are viable (absent a strong understanding of what’s going on inside the box), and (b) he sees the ‘open the black box’ type work as the critical blocker. (Hence his relative enthusiasm for Chris Olah’s work, which, you’ll notice, is about deep learning and not about decision theory.)
You’ve had a few comments along these lines in this thread, and I think this is where you’re most severely failing to see the situation from Yudkowsky’s point of view.
From Yudkowsky’s view, explaining and justifying MIRI’s work (and the processes he uses to reach such judgements more generally) was the main point of the sequences. He has written more on the topic than anyone else in the world, by a wide margin. He basically spent several years full-time just trying to get everyone up to speed, because the inductive gap was very very wide.
When I put on my Yudkowsky hat and look at both the OP and your comments through that lens… I imagine if I were Yudkowsky I’d feel pretty exasperated at this point. Like, he’s written a massive volume on the topic, and now ten years later a large chunk of people haven’t even bothered to read it. (In particular, I know (because it’s come up in conversation) that at least a few of the people who talk about prosaic alignment a lot haven’t read the sequences, and I suspect that a disproportionate number haven’t. I don’t mean to point fingers or cast blame here, the sequences are a lot of material and most of it is not legibly relevant before reading it all, but if you haven’t read the sequences and you’re wondering why MIRI doesn’t have a write-up on why they’re not excited about prosaic alignment… well, that’s kinda the write-up. Also I feel like I need a disclaimer here that many people excited about prosaic alignment have read the sequences, I definitely don’t mean to imply that this is everyone in the category.)
(To be clear, I don’t think the sequences explain all of the pieces behind Yudkowsky’s views of prosaic alignment, in depth. They were written for a different use-case. But I do think they explain a lot.)
Related: IMO the best roughly-up-to-date piece explaining the Yudkowsky/MIRI viewpoint is The Rocket Alignment Problem.
Thanks for the pushback!
My memory of the sequences is that they’re far more about defending and explaining the alignment problem than criticizing prosaic AGI (maybe because the term couldn’t have been used years before Paul coined it?). Could you give me the best pointers to criticism of prosaic alignment in the sequences? (I’ve read the sequences, but I don’t remember every single post, and my impression from memory is what I’ve written above.)
I also feel that there might be a discrepancy between who I think of when I think of prosaic alignment researchers and what the category means in general/to most people here? My category mostly includes AF posters, people from a bunch of places like EleutherAI/OpenAI/DeepMind/Anthropic/Redwood, and people from CHAI and FHI. I expect most of these people to actually have read the sequences and tried to understand MIRI’s perspective. Maybe someone could point out a list of other places where prosaic alignment research is being done that I’m missing, especially places where people probably haven’t read the sequences? Or maybe I’m overestimating how many of the people in the places I mentioned have read the sequences?
I don’t mean to say that there’s critique of prosaic alignment specifically in the sequences. Rather, a lot of the generators of the Yudkowsky-esque worldview are in there. (That is how the sequences work: it’s not about arguing specific ideas around alignment, it’s about explaining enough of the background frames and generators that the argument becomes unnecessary. “Raise the sanity waterline” and all that.)
For instance, just the other day I ran across this:
(The earlier part of the post had a couple embarrassing stories of mistakes Yudkowsky made earlier, which is where the lesson came from.) Reading that, I was like, “man that sure does sound like the Yudkowsky-esque viewpoint on prosaic alignment”.
I think you are overestimating. At the orgs you list, I’d guess at least 25% and probably more than half have not read the sequences. (Low confidence/wide error bars, though.)
Thank you for the links Adam. To clarify, the kind of argument I’m really looking for is something like the following three (hypothetical) examples.
Mesa-optimization is the primary threat model of unaligned AGI systems. Over the next few decades there will be a lot of companies building ML systems that create mesa-optimizers. I think that within 5 years of current progress we will understand how ML systems create mesa-optimizers and how to stop it. Therefore I think the current field is adequate for the problem (80%).
When I look at the research we’re outputting, it seems to me that we are producing research with a speed and flexibility greater than any comparably sized academic department globally, or the ML industry, and so I am much more hopeful that we’re able to solve our difficult problem before the industry builds an unaligned AGI. I give it a 25% probability, which I suspect is much higher than Eliezer’s.
I basically agree the alignment problem is hard and unlikely to be solved, but I don’t think we have any alternative than the current sorts of work being done, which is a combo of (a) agent foundations work (b) designing theoretical training algorithms (like Paul is) or (c) directly aligning narrowly super intelligent models. I am pretty open to Eliezer’s claim that we will fail but I see no alternative plan to pour resources into.
Whatever you actually think about the field and how it will save the world, say it!
It seems to me that almost all of the arguments you’ve made work whether the field is a failure or not. The debate here has to pass through whether the field is on track or not, and we must not sidestep that conversation.
I want to leave this paragraph as social acknowledgment that you mentioned upthread that you’re tired and taking a break, and I want to give you a bunch of social space to not return to this thread for however long you need to take! Slow comments are often the best.
Thanks for the examples, that helps a lot.
I’m glad that I posted my inflammatory comment, if only because exchanging with you and Rob made me actually consider the question of “what is our story to success”, instead of just “are we making progress/creating valuable knowledge”. And the way you two have been casting it is way less aversive to me than the way EY tends to frame it. This is definitely something I want to think more about. :)
Appreciated. ;)
Glad to hear. And yeah, that’s the crux of the issue for me.
! Yay! That’s really great to hear. :)
I’m sympathetic to most of your points.
I have sympathy for the “this feels somewhat contemptuous” reading, but I want to push back a bit on the “EY contemptuously calling nearly everyone fakers” angle, because I think “[thinly] veiled contempt” is an uncharitable reading. He could be simply exasperated about the state of affairs, or wishing people would change their research directions but respect them as altruists for Trying At All, or who knows what? I’d rather not overwrite his intentions with our reactions (although it is mostly the job of the writer to ensure their writing communicates the right information [although part of the point of the website discussion was to speak frankly and bluntly]).