There are four key differences between this and the current AI situation that I think make this perspective pretty outdated:
1) AIs are made out of ML, so we have very fine-grained control over how we train them and modify them for deployment, unlike animals, which have unpredictable biological drives and long feedback loops.
2) By now, AIs are obviously developing generalized capabilities. Rather than arguments over whether AIs will ever be superintelligent, the bulk of the discourse is over whether they will supercharge economic growth or cause massive job loss, and how quickly.
3) There are at least 10 companies that could build superintelligence within 10ish years, and their CEOs are all high on motivated reasoning, so stopping is infeasible.
4) Current evidence points to takeoff being continuous and merely very fast; even automating AI R&D won't cause the hockey-stick graph that human civilization had.
Re continuous takeoff: you could argue that human takeoff was continuous, or only mildly discontinuous, just very fast. In any case, it could well be discontinuous relative to your OODA loop and the variables you were tracking, so unfortunately I think the continuity of the takeoff is less relevant than people thought; it matters for alignment, but not for AI governance. (A toy sketch below illustrates the OODA-loop point.) See:
https://www.lesswrong.com/posts/cHJxSJ4jBmBRGtbaE/continuity-assumptions#DHvagFyKb9hiwJKRC
https://www.lesswrong.com/posts/cHJxSJ4jBmBRGtbaE/continuity-assumptions#tXKnoooh6h8Cj8Tpx
Agree that AI takeoff could likely be faster than our OODA loop.
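To make the OODA-loop point concrete, here is a toy numerical sketch; the doubling time, the institutional response time, and the "superhuman" threshold are all illustrative assumptions of mine, not anyone's forecast. The growth curve is perfectly smooth, yet it crosses the entire subhuman-to-superhuman range well inside a single decision cycle:

```python
# Toy sketch: a perfectly smooth exponential can still be "discontinuous
# relative to your OODA loop". All constants below are illustrative
# assumptions, not forecasts.
import math

DOUBLING_TIME_MONTHS = 2.0      # assumed capability doubling time once AI R&D is automated
OODA_LOOP_MONTHS = 18.0         # assumed time for institutions to observe, decide, and act
START_CAPABILITY = 1.0          # "clearly subhuman" baseline (arbitrary units)
SUPERHUMAN_THRESHOLD = 100.0    # "clearly superhuman" level (arbitrary units)

def capability(months: float) -> float:
    """Perfectly smooth exponential growth: no discontinuity anywhere."""
    return START_CAPABILITY * 2 ** (months / DOUBLING_TIME_MONTHS)

# Time for the smooth curve to cross the entire human-relevant range.
months_to_cross = DOUBLING_TIME_MONTHS * math.log2(SUPERHUMAN_THRESHOLD / START_CAPABILITY)

print(f"Months to go from baseline to superhuman:  {months_to_cross:.1f}")
print(f"Months in one governance OODA loop:        {OODA_LOOP_MONTHS:.1f}")
print(f"Capability multiplier after one OODA loop: {capability(OODA_LOOP_MONTHS):.0f}x")
print("Continuous, yet faster than the OODA loop:", months_to_cross < OODA_LOOP_MONTHS)
```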
re: 1) I don’t think we do have fine-grained control over the outcomes of training LLMs and other ML systems, which is what really matters. See the recent emergent self-preservation behavior.
re: 2) I’m saying that I think those arguments are distractions from the much more important question of x-risk. But sure, this metaphor doesn’t address economic impact aside from “I think we could gain a lot from cooperating with them to hunt fish”.
re: 3) I’m not sure I see the relevance. The unnamed audience member saying “I say we keep giving them nootropics” is meant to represent the AI researchers who aren’t actively involved in the x-risk debate and who keep making progress on AI capabilities while the arguers talk to each other.
re: 4) It sounds like you’re comparing something like a log graph of human capability to a linear graph of AI capability. That is, I don’t think that AI will take tens of thousands of years to develop the way human civilization did. My 50% confidence interval on when the Singularity will happen is 2026-2031, and my 95% interval only extends to maybe 2100. I expect there to be more progress in AI development in 2025-2026 than in 1980-2020.
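To illustrate the log-vs-linear point, here is a similar toy sketch; the total number of doublings and the two time windows are illustrative assumptions only. The curve has the same hockey-stick shape in both cases; what changes is how much calendar time the final doublings take:

```python
# Toy sketch: the same exponential looks like a hockey stick on a linear time
# axis whether it plays out over millennia or over a few years; only the clock
# speed differs. All numbers are illustrative assumptions, not forecasts.
import math

TOTAL_DOUBLINGS = 20  # same overall capability multiplier (~1,000,000x) in both scenarios

def fraction_of_window_to_reach(share_of_final: float, doublings: float) -> float:
    """Fraction of the time window at which `share_of_final` of the end capability is reached."""
    return 1 + math.log2(share_of_final) / doublings

for label, window_years in [("human civilization", 10_000), ("AI development", 6)]:
    half_point_years = fraction_of_window_to_reach(0.5, TOTAL_DOUBLINGS) * window_years
    print(f"{label}: half of the final capability only arrives after "
          f"{half_point_years:,.1f} of {window_years:,} years")
```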