Yeah, the essay (I think correctly) notes that the most significant breakthroughs in biotech come from the small number of “broad measurement tools or techniques that allow precise but generalized or programmable intervention”, which “are so powerful precisely because they cut through intrinsic complexity and data limitations, directly increasing our understanding and control”.
Why, then, would such systems be limited to the biological domain? Even if it does end up being true that scientific and technological progress is substantially bottlenecked on real-life experimentation, where even AIs that can extract many more bits from the same observations than humans still suffer from substantial serial dependencies with no meaningful “shortcuts”, it still seems implausible that we don’t get to nanotech relatively quickly, if it’s physically realizable. And that nanotech would then unblock the rate of experimentation. (If you’re skeptical of nanotech, human-like robots seem sufficient as actuators to speed up real-life experimentation by at least an order of magnitude compared to needing to work through humans, and work on those is making substantial progress.)
If Dario thinks that progress will cap out at some level due to humans intentionally slowing down, it seems good to say this.
Footnote 2 maybe looks like a hint in this direction if you squint, but Dario spent a decent chunk of the essay bracketing outcomes he thought were non-default and would need to be actively steered towards, so it’s interesting that he didn’t explicitly list those (non-tame futures) as a type of outcome that he’d want to actively steer away from.
My answer to the question of why Dario thought this:
Yeah, the essay (I think correctly) notes that the most significant breakthroughs in biotech come from the small number of “broad measurement tools or techniques that allow precise but generalized or programmable intervention”, which “are so powerful precisely because they cut through intrinsic complexity and data limitations, directly increasing our understanding and control”.
Why, then, would such systems be limited to the biological domain?
is that biology is the area Dario has the most experience in, being a biologist himself, and he freely admits to having limited expertise outside it:
I am fortunate to have professional experience in both biology and neuroscience, and I am an informed amateur in the field of economic development, but I am sure I will get plenty of things wrong. One thing writing this essay has made me realize is that it would be valuable to bring together a group of domain experts (in biology, economics, international relations, and other areas) to write a much better and more informed version of what I’ve produced here. It’s probably best to view my efforts here as a starting prompt for that group.
I also believe this set of reasons comes into play here:
Avoid grandiosity. I am often turned off by the way many AI risk public figures (not to mention AI company leaders) talk about the post-AGI world, as if it’s their mission to single-handedly bring it about like a prophet leading their people to salvation. I think it’s dangerous to view companies as unilaterally shaping the world, and dangerous to view practical technological goals in essentially religious terms.
Avoid “sci-fi” baggage. Although I think most people underestimate the upside of powerful AI, the small community of people who do discuss radical AI futures often does so in an excessively “sci-fi” tone (featuring e.g. uploaded minds, space exploration, or general cyberpunk vibes). I think this causes people to take the claims less seriously, and to imbue them with a sort of unreality. To be clear, the issue isn’t whether the technologies described are possible or likely (the main essay discusses this in granular detail)—it’s more that the “vibe” connotatively smuggles in a bunch of cultural baggage and unstated assumptions about what kind of future is desirable, how various societal issues will play out, etc. The result often ends up reading like a fantasy for a narrow subculture, while being off-putting to most people.