Thanks for this thoughtful review! Below are my thoughts:
--I agree that this post contributes to the forecasting discussion in the way you mention. However, that's not the main way I think it contributes. The main way it contributes is that it operationalizes a big timelines crux & forcefully draws people's attention to it. I wrote this post after reading Ajeya's Bio Anchors report carefully many times, annotating it, and starting several gdocs with various disagreements. In doing so I found that some disagreements didn't change the bottom line much, while others were huge cruxes. This one was the biggest crux of all, so I discarded the rest and focused on getting this out there. And I didn't even have the energy/time to properly argue for my side of the crux (there's so much more I could say!), so I contented myself with having the conclusion be "here's the crux; y'all should think and argue about this instead of the other stuff."
--I agree that there's a lot more I could have done to argue that OmegaStar, Amp(GPT-7), etc. would be transformative. I could have talked about scaling laws, about how AlphaStar is superhuman at Starcraft and therefore OmegaStar should be superhuman at all games, etc. I could have talked about how Amp(GPT-7) combines the strengths of neural nets and language models with the strengths of traditional software. Instead I just described how they were trained, and left it up to the reader to draw conclusions. This was mainly because of space/time constraints (it's a long post already, and I figured I could always follow up later, or in the comments). I had hoped that people would reply with objections to specific designs, e.g. "OmegaStar won't work because X," and then I could have a conversation about it in the comments. A secondary reason was infohazard concerns: it was already a bit iffy for me to be sketching AGI designs on the internet, even though I was careful to target +12 OOMs instead of +6; it would have been worse if I had also forcefully argued that the designs would succeed in creating something super powerful. (This is also a partial response to your critique that the numbers are too big, too fun: I could have made much the same point with +6 OOMs instead of +12 (though not with, say, +3 OOMs; those numbers would be too small), but I wanted to put an extra smidgen of distance between the post and "here's a bunch of ideas for how to build AGI soon.")
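(For concreteness about those magnitudes, using the standard meaning of "orders of magnitude" rather than anything new: "+N OOMs" means multiplying the compute used by a factor of $10^N$, i.e.

$$+12 \text{ OOMs} = 10^{12}\times,\qquad +6 \text{ OOMs} = 10^{6}\times,\qquad +3 \text{ OOMs} = 10^{3}\times$$

so the distance between the +12 and +6 scenarios is itself a factor of a million in compute.)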
Anyhow, the post you say you would love to read, I too would love to read, and I'd love to write it as well; it could be a followup to this one. That said, to be honest I probably don't have the time to devote to writing it, so I hope someone else does instead! (Or, equally good IMO, would be someone writing a post explaining why none of the 5 designs I sketched would work. Heck, I think I'd like that even more, since it would tell me something I don't already think I know.)