more researchers should backchain from “how do I make AGI timelines longer”
Like you mention, “end time” seems (much) more valuable than earlier time. But the framing here, as well as the broader framing of “buying time,” collapses that distinction (by just using “time” as the metric). So I’d suggest more heavily emphasizing buying end time.
One potential response is: it doesn’t matter; both framings suggest the same interventions. But that seems wrong. For example, slowing down AI progress now seems like it would mostly buy “pre-end time” (and potentially burn “end time,” if the way we slow down is by safety-conscious labs burning their leads). In contrast, setting up standards, regulations, and coordination mechanisms for mitigating racing and unilateralist dynamics at end time would buy us end time.