I think as of early this year (like, January/February, before I saw a version of this doc) I could have produced a pretty similar list to this one. I definitely would not derive it from the empty string in the closest world-without-Eliezer; I’m unsure how much I’d pay attention to AI alignment at all in that world. I’d very likely be working on agent foundations in that world, but possibly in the context of biology or AI capabilities rather than alignment. Arguments about AI foom and doom were obviously-to-me correct once I paid attention to them at all, but not something I’d have paid attention to on my own without someone pointing them out.
Some specifics about the kind-of-doc I could have written early this year
The framing around pivotal acts specifically was new-to-me when the late 2021 MIRI conversations were published. Prior to that, I’d have had to talk about how weak wish-granters are safe but not actually useful, and if we want safe AI which actually grants big wishes then we have to deal with the safety problems. Pivotal acts framing simplifies that part of the argument a lot by directly establishing a particular “big” capability which is necessary.
By early this year, I think I would have generated pretty similar points to basically everything in the post if I were trying to be really comprehensive. (In practice, writing a post like this, I would go for more unifying structure and thought-generators rather than comprehensiveness; I’d use the individual failure modes more as examples of their respective generators.)
In my traversal-order of barriers, the hard conceptual barriers for which we currently have no solution even in principle (e.g. 16-19) would get a lot more weight and detail; I’d spend less time on what-I-mentally-categorize-as “the obvious things which go wrong with stupid approaches” (20, 21, 25-36).
Just within the past week, this post on interpretability was one which would probably turn into a point on my equivalent of Eliezer’s list.
The earlier points are largely facts-about-the-world (e.g. 1, 2, 7-9, 12-15). For many of these, I would cite different evidence, although the conclusions remain the same. True facts are, as a general rule, overdetermined by evidence; there are many paths to them, and I didn’t always follow the same paths Eliezer does here.
A few points I think are wrong (notably 18, 22, 24 to a limited extent), but are correct relative to the knowledge/models which most proposals actually leverage. The loopholes there are things which you do need pretty major foundational/conceptual work to actually steer through.
I would definitely have generated some similar rants at the end, though of course not identical.
One example: just yesterday I was complaining about how people seem to generate alignment proposals via a process of (1) come up with a neat idea, (2) come up with some conditions under which that idea would maybe work (or at least not obviously fail in any of the ways the person knows to look for), (3) either claim that “we just don’t know” whether the conditions hold (without any serious effort to look for evidence), or directly look for evidence that they hold. Pretty standard bottom-line failure.
I did briefly consider writing something along these lines after Eliezer made a similar comment to 39 in the Late 2021 MIRI Conversations. But as Kokotajlo guessed, I did not think that was even remotely close to the highest-value use of my time. It would probably take me a full month’s work to do it right, and the list just isn’t as valuable as my last month of progress. Or the month before that. Or the month before that.
I’m curious about why you decided it wasn’t worth your time.
Going from the post itself, the case for publishing it goes something like “the whole field of AI Alignment is failing to produce useful work because people aren’t engaging with what’s actually hard about the problem and are ignoring all the ways their proposals are doomed; perhaps yelling at them via this post might change some of that.”
Accepting the premises (which I’m inclined to), trying to get the entire field to correct course seems actually pretty valuable, maybe even worth a month of your time, now that I think about it.
First and foremost, I have been making extraordinarily rapid progress in the last few months, though most of that is not yet publicly visible.
Second, a large part of why people pour effort into not-very-useful work is that the not-very-useful work is tractable. Useless, but at least you can make progress on the useless thing! Few people really want to work on problems which are actually Hard, so people will inevitably find excuses to do easy things instead. As Eliezer himself complains, writing the list just kicks the can down the road; six months later people will have a new set of bad ideas with giant gaping holes in them. The real goal is to either:
produce people who will identify the holes in their own schemes, repeatedly, until they converge to work on things which are actually useful despite being Hard, or
get enough of a paradigm in place that people can make legible progress on actually-useful things without doing anything Hard.
I have recently started testing out methods for the former, but it’s the sort of thing which starts out with lots of tests on individuals or small groups to see what works. The latter, of course, is largely what my technical research is aimed at in the medium term.
(I also note that there will always be at least some need for people doing the Hard things, even once a paradigm is established.)
In the short term, if people want to identify the holes in their own schemes and converge to work on actually useful things, I think the “builder/breaker” methodology that Paul uses in the ELK doc is currently a good starting point.
Well, it’s the Law of Continued Failure, as Eliezer termed it himself, no? There have already been a lot of rants about the real problems of alignment and how basically no one focuses on them, most of them Eliezer-written as well. The sort of person who wasn’t convinced/course-corrected by previous scattered rants isn’t going to be course-corrected by a giant post compiling all the rants in one place. Someone to whom this post would be of use is someone who’s already absorbed all the information contained in it from other sources; someone who can already write it up on their own.
The picture may not be quite as grim as that, but yeah I can see how writing it would not be anyone’s top priority.
I definitely would not derive it from the empty string in the closest world-without-Eliezer; I’m unsure how much I’d pay attention to AI alignment at all in that world. I’d very likely be working on agent foundations in that world, but possibly in the context of biology or AI capabilities rather than alignment. Arguments about AI foom and doom were obviously-to-me correct once I paid attention to them at all, but not something I’d have paid attention to on my own without someone pointing them out.
I don’t think he does this; that’d be ridiculous.
“I can’t find any good alignment researchers. The only way I know how to find them is by explaining that the field is important, using arguments for AI risk and doomerism, which means they didn’t come up with those arguments on their own, and thus cannot be ‘worthy’.”
Doesn’t do what? I understand Eliezer to be saying that he figured out AI risk via thinking things through himself (e.g., writing a story that involved outcome pumps; reflecting on orthogonality and instrumental convergence; etc.), rather than being argued into it by someone else who was worried about AI risk. If Eliezer didn’t do that, there would still presumably be someone prior to him who did that, since conclusions and ideas have to enter the world somehow. So I’m not understanding what you’re modeling as ridiculous.
(I don’t know that foom falls into the same category; did Vinge’s or I.J. Good’s arguments help persuade EY here?)
“I can’t find any good alignment researchers. The only way I know how to find them is by explaining that the field is important, using arguments for AI risk and doomerism, which means they didn’t come up with those arguments on their own, and thus cannot be ‘worthy’.”
This is phrased in a way that’s meant to make the standard sound unfair or impossible. But it seems like a perfectly fine Bayesian update:
There’s no logical necessity that we live in a world that lacks dozens of independent “Eliezers” who all come up with this stuff and write about it. I think Nick Bostrom had some AI risk worries independently of Eliezer, so gets at least partial credit on this dimension. Others who had thoughts along these lines independently include Norbert Wiener and I.J. Good (timeline with more examples).
You could imagine a world that has much more independent discovery on this topic, or one where all the basic concepts of AI risk were being widely discussed and analyzed back in the 1960s. It’s a fair Bayesian update (sketched after this list) to note that we don’t live in worlds that are anything like that, even if it’s not a fair test of individual ability for people who, say, encountered all of Eliezer’s writing as soon as they even learned about the concept of AI.
(I could also imagine a world where more of the independent discoveries result in serious research programs being launched, rather than just resulting in someone writing a science fiction story and then moving on with a shrug!)
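To make the direction of that update explicit, here is a minimal sketch in my own notation (not anything Rob spelled out): let H be “civilization is broadly on-the-ball about AI risk” and E be “only a handful of people independently generated these arguments over several decades.” By Bayes’ rule,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)},$$

and since sparse independent discovery is less likely in on-the-ball worlds than in oblivious ones, i.e. $P(E \mid H) < P(E \mid \neg H)$, observing E pushes the posterior below the prior: $P(H \mid E) < P(H)$. The update is about which kind of world we’re in, not about any one individual’s ability.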
Your summary leaves out that “coming up with stuff without needing to be argued into it” is a matter of degree, and that there are many important claims here beyond just ‘AI risk is worth paying attention to at all’.
It’s logically possible to live in a world where people need to have AI risk brought to their attention, but then they immediately “get it” when they hear the two-sentence version, rather than needing an essay-length or seven-essay-length explanation. To the extent we live in a world where many key players need the full essay, and many other smart, important people don’t even “get it” after hours of conversation (e.g., LeCun), that’s a negative update about humanity’s odds of success.
Similarly, it’s logically possible to live in a world where people needed persuading to accept the core ‘AI risk’ thing, but then they have an easy time generating all the other important details and subclaims themselves. “Maximum doom” and “minimum doom” aren’t the only options; the exact level of doominess matters a lot.
E.g., my Eliezer-model thinks that nearly all public discussion of ‘practical implications of logical decision theory’ outside of MIRI (e.g., discussion of humans trying to acausally trade with superintelligences) has been utterly awful. If instead this discourse had managed to get a ton of stuff right even though EY wasn’t releasing much of his own detailed thoughts about acausal trade, then that would have been an important positive update.
Eliezer spent years alluding to his AI risk concerns on Overcoming Bias without writing them all up, and deliberately withheld many related arguments for years (including as recently as last year) in order to test whether anyone else would generate them independently. It isn’t the case that humanity had to passively wait to hear the full argument from Eliezer before it was permitted for them to start thinking and writing about this stuff.
Doesn’t do what? I understand Eliezer to be saying that he figured out AI risk via thinking things through himself (e.g., writing a story that involved outcome pumps; reflecting on orthogonality and instrumental convergence; etc.), rather than being argued into it by someone else who was worried about AI risk. If Eliezer didn’t do that, there would still presumably be someone prior to him who did that, since conclusions and ideas have to enter the world somehow. So I’m not understanding what you’re modeling as ridiculous.
My understanding of the history is that Eliezer did not realize the importance of alignment at first, and that he only did so later after arguing about it online with people like Nick Bostrom. See e.g. this thread. I don’t know enough of the history here, but it also seems logically possible that Bostrom could have, say, only realized the importance of alignment after conversing with other people who also didn’t realize the importance of alignment. In that case, there might be a “bubble” of humans who together satisfy the null string criterion, but no single human who does.
The null string criterion does seem a bit silly nowadays, since I think the people who would have satisfied it would have sooner read about AI risk on e.g. LessWrong. So they wouldn’t even get the chance to reach age ~21 unexposed, to see whether they’d spontaneously invent the ideas.
Look, maybe you’re right. But I’m not good at complicated reasoning; I can’t confidently verify these results you’re giving me. My brain is using a much simpler heuristic that says: look at all of these other fields whose core insights could have been arrived at way earlier than they were. Look at Newton! Look at Darwin! Certainly game theorists could have come along a lot sooner. But that doesn’t mean the founders of those fields were the only ones Great enough to make progress. So, what are you saying, exactly?