This podcast goes over the <@biological anchors framework@>(@Draft report on AI timelines@), as well as [three](https://arxiv.org/abs/1705.08807) <@other@>(@Modeling the Human Trajectory@) <@approaches@>(@Semi-informative priors over AI timelines@) to forecasting AI timelines and a post on <@aligning narrowly superhuman models@>(@The case for aligning narrowly superhuman models@). I recommend reading my summaries of those works individually to learn what each is about. This podcast can help contextualize all of the work, adding details that you wouldn’t naturally see if you just read the reports or my summaries of them.
For example, I learned that there is a distinction between gradient noise and effective horizon length. To the extent that your gradients are noisy, you can simply fix the problem by increasing your batch size (which can be done in parallel). However, the effective horizon length measures how many _sequential_ steps you have to take before you get feedback on how well you’re doing. The two are separated in the bio anchors work because the author wanted to impose specific beliefs on the effective horizon length, but was happy to continue extrapolating from current examples for noise.
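To make the distinction concrete, here is a minimal sketch of my own (not taken from the report, and with entirely hypothetical numbers) of how the two quantities enter a bio-anchors-style compute estimate differently: batch size absorbs gradient noise in parallel, while horizon length multiplies the sequential per-sample cost.

```python
# Illustrative sketch only: a bio-anchors-style decomposition of training
# compute. All function names and numbers are hypothetical placeholders.

def training_flop(flop_per_subjective_second: float,
                  effective_horizon_seconds: float,
                  num_samples: float) -> float:
    """Total training compute if each sample requires running the model for
    `effective_horizon_seconds` of subjective time before any feedback."""
    return flop_per_subjective_second * effective_horizon_seconds * num_samples

# Gradient noise, by contrast, only changes how samples are grouped into
# batches: a larger batch averages more gradients in parallel, so it affects
# wall-clock parallelism rather than the total sample count or the
# sequential cost per sample.
batch_size = 1024            # hypothetical: raise this to reduce gradient noise
sequential_updates = 1e6     # hypothetical: gated by feedback, not by batch size
num_samples = batch_size * sequential_updates

print(f"{training_flop(1e15, 1.0, num_samples):.2e} FLOP")
```

The point of the sketch is just that doubling the horizon length doubles the compute no matter how you batch, whereas noisier gradients can be paid for with parallelism, which is why the author treats the two separately.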
Planned summary for the Alignment Newsletter: