(cross-posting my take from twitter)
(1) I think very dangerous AGI will come eventually, and that we’re extremely not ready for it, and that we’re making slow but steady progress right now on getting ready, and so I’d much rather it come later than sooner.
(2) It’s hard to be super-confident, but I think the “critical path” towards AGI mostly looks like “research” right now, and will mostly look like “scaling up known techniques” at some point in the future—but not yet.
(3) I think the big direct effect of a moratorium on “scaling up” is substitution: more “research” than otherwise—which is the opposite of what I want. (E.g. “Oh, we’re only allowed X compute / data / params? Cool—let’s figure out how to get more capabilities out of X compute / data / params!!”)
(4) I’m sympathetic to the idea that some of the indirect effects of the FLI pause letter might align with my goals, like “practice for later” or “sending a message” or “reducing AI investments” etc. I’m also sympathetic to the fact that a lot of reasonable people in my field disagree with me on (2). But those considerations don’t outweigh (3) for me. So for my part, I’m not signing, but I’m also not judging those who do.
(5) While I’m here, I want to more generally advocate that, insofar as we’re concerned about “notkilleveryoneism” (and we should be), we should be talking more about “research” and less about “deployment”. I think that “things that happen within the 4 walls of an R&D department, and on arxiv & github” are the main ingredients in how soon dangerous AGI arrives; and likewise, if someday an AI gets out of control and kills everyone, I expect this specific AI to have never been deliberately “deployed” to the public. This makes me less enthusiastic about certain proposed regulations than some other people in my field, and relatively more enthusiastic about e.g. outreach & dialog with AI researchers […which might lead to them (A) helping with the alignment problem and (B) not contributing to AGI-relevant research & tooling (or at least not publishing / open-sourcing it)].
Further reading: https://80000hours.org/problem-profiles/artificial-intelligence/ & https://alignmentforum.org/posts/rgPxEKFBLpLqJpMBM/response-to-blake-richards-agi-generality-alignment-and-loss & https://www.alignmentforum.org/posts/MCWGCyz2mjtRoWiyP/endgame-safety-for-agi