(I work at Anthropic.) My read of the “touch grass” comment is informed a lot by the very next sentences in the essay:
But more importantly, tame is good from a societal perspective. I think there’s only so much change people can handle at once, and the pace I’m describing is probably close to the limits of what society can absorb without extreme turbulence.
which I read as saying something like “It’s plausible that things could go much faster than this, but as a prediction about what will actually happen, humanity as a whole probably doesn’t want things to get incredibly crazy so fast, and so we’re likely to see something tamer.” I basically agree with that.
Do Anthropic employees who think less tame outcomes are plausible believe Dario when he says they should “touch grass”?
FWIW, I don’t read the footnote as saying “if you think crazier stuff is possible, touch grass”—I read it as saying “if you think the stuff in this essay is ‘tame’, touch grass”. The stuff in this essay is in fact pretty wild!
That said, I think I have historically underrated questions of how fast things will go given realistic human preferences about the pace of change, and that I might well have updated more in the above direction if I’d chatted with ordinary people about what they want out of the future, so “I needed to touch grass” isn’t a terrible summary. But IMO believing “really crazy scenarios are plausible on short timescales and likely on long timescales” is basically the correct opinion, and to the extent the essay can be read as casting shade on such views, it’s wrong to do so. I would have worded this bit of the essay differently.
Re: honesty and signaling, I think it’s true that this essay’s intended audience is not really the crowd that’s already gamed out Mercury disassembly timelines, and its focus is on getting people up to shock level 2 or so rather than SL4, but as far as I know everything in it is an honest reflection of what Dario believes. (I don’t claim any special insight into Dario’s opinions here, just asserting that nothing I’ve seen internally feels in tension with this essay.) Like, it isn’t going out of its way to talk about the crazy stuff, but I don’t read that omission as dishonest.
For my own part:
I think it’s likely that we’ll get nanotech, von Neumann probes, Dyson spheres, computronium planets, acausal trade, etc in the event of aligned AGI.
Whether that stuff happens within the 5-10y timeframe of the essay is much less obvious to me—I’d put it around 30-40% odds conditional on powerful AI from roughly the current paradigm, maybe?
In the other 60-70% of worlds, I think this essay does a fairly good job of describing my 80th percentile expectations (by quality-of-outcome rather than by amount-of-progress); a toy version of how these numbers fit together is sketched just after this list.
I would guess that I’m somewhat more Dyson-sphere-pilled than Dario.
I’d be pretty excited to see competing forecasts for what good futures might look like! I found this essay helpful for getting more concrete about my own expectations, and many of my beliefs about good futures look like “X is probably physically possible; X is probably usable-for-good by a powerful civilization; therefore probably we’ll see some X” rather than having any kind of clear narrative about how the path to that point looks.
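To be a bit more explicit about how the numbers in those bullets fit together, here is a toy sketch of the split. The prior on getting powerful AI from roughly the current paradigm is an arbitrary placeholder inserted purely for illustration, not a number from the essay or from anywhere in this thread; only the conditional 30-40% figure (and its 60-70% complement) comes from the bullets above.

```python
# Toy decomposition of the estimates in the bullets above.
# p_powerful_ai is a made-up placeholder for illustration only; it is NOT a
# number claimed in the essay or in this thread. Only the 30-40% conditional
# estimate comes from the bullets above (taken here at its midpoint).

p_powerful_ai = 0.5            # hypothetical prior: powerful AI from ~the current paradigm
p_crazy_given_powerful = 0.35  # midpoint of the 30-40% conditional estimate above

p_crazy_decade = p_powerful_ai * p_crazy_given_powerful      # Dyson-sphere-ish stuff within 5-10y
p_essay_like = p_powerful_ai * (1 - p_crazy_given_powerful)  # the other 60-70%: essay ~= 80th percentile

print(f"P(crazy outcomes within 5-10y of powerful AI): {p_crazy_decade:.2f}")
print(f"P(powerful AI, but a tamer essay-like decade): {p_essay_like:.2f}")
```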
humanity as a whole probably doesn’t want things to get incredibly crazy so fast, and so we’re likely to see something tamer
Doesn’t this require a pretty strong and unprecedented level of international coordination on stopping an obviously, immediately, extremely valuable and militarily relevant technology? I think a US-backed entente could impose this on the rest of the world, but that would also be an unprecedentedly large effort.
I think this is certainly possible and I hope this level of coordination happens, but I don’t exactly think it’s likely on timelines this short.
More minimally, I think the vast majority of readers won’t read this essay and understand that this refers to a world in which there was a massive effort to slow down AI or in which AI was quite surprisingly slow in various ways. (Insofar as this is what Dario thinks.) Dario seems to be trying to make realistic/plausible predictions and often talks about what is “possible” (not what is possible with a desirable level of AI progress), so not mentioning this seems to greatly undermine the prediction aspect of the essay.
I agree it seems unlikely that we’ll see coordination on slowing down before one actor or coalition has a substantial enough lead over other actors that it can enforce such a slowdown unilaterally, but I think it’s reasonably likely that such a lead will arise before things get really insane.
A few different stories under which one might go from aligned “genius in a datacenter” level AI at time t to outcomes merely at the level of weirdness in this essay at t + 5-10y:
The techniques that work to align “genius in a datacenter” level AI don’t scale to wildly superhuman intelligence (eg because they lose some value fidelity from human-generated oversight signals that’s tolerable at one remove but very risky at ten). The alignment problem for serious ASI is quite hard to solve at the mildly superintelligent level, and it genuinely takes a while to work out enough that we can scale up (since the existing AIs, being aligned, won’t design unaligned successors).
If people ask their only-somewhat-superhuman AI what to do next, the AIs say “A bunch of the decisions from this point on hinge on pretty subtle philosophical questions, and frankly it doesn’t seem like you guys have figured all this out super well, have you heard of this thing called a long reflection?” That’s what I’d say if I were a million copies of me in a datacenter advising a 2024-era US government on what to do about Dyson swarms!
A leading actor uses their AI to ensure continued strategic dominance and prevent competing AI projects from posing a meaningful threat. Having done so, they just… don’t really want crazy things to happen really fast, because the actor in question is mostly composed of random politicians or whatever. (I’m personally sympathetic to astronomical waste arguments, but it’s not clear to me that people likely to end up with the levers of power here are.)
The serial iteration times and experimentation loops are just kinda slow and annoying, and mildly-superhuman AI isn’t enough to circumvent experimentation time bottlenecks (some of which end up being relatively slow), and there are stupid zoning restrictions on the land you want to use for datacenters, and some regulation adds lots of mandatory human overhead to some critical iteration loop, etc.
This isn’t a claim that maximal-intelligence-per-cubic-meter ASI initialized in one datacenter would face long delays in making efficient use of its lightcone, just that it might be tough for a not-that-much-better-than-human AGI that’s aligned and trying to respect existing regulations and so on to scale itself all that rapidly.
Among the tech unlocked by relatively early-stage AGI is better coordination, and that helps Earth get out of unsavory race dynamics and decide to slow down.
The alignment tax at the superhuman level is pretty steep, and doing self-improvement while preserving alignment goes much slower than unrestricted self-improvement would; since at this point we have many fewer ongoing moral catastrophes (eg everyone who wants to be cryopreserved is, we’ve transitioned to excellent cheap lab-grown meat), there’s little cost to proceeding very cautiously.
This is sort of a continuous version of the first bullet point with a finite rather than infinite alignment tax.
All that said, upon reflection I think I was probably lowballing the odds of crazy stuff on the 10y timescale, and I’d go to more like 50-60% that we’re seeing mind uploads and Kardashev level 1.5-2 civilizations etc. a decade out from the first powerful AIs.
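For a rough sense of scale on “Kardashev level 1.5-2”, here is a back-of-the-envelope sketch using Sagan’s standard interpolation K = (log10 P − 6)/10 with P in watts. The reference power figures below are standard ballpark values (current human power use, sunlight intercepted by Earth, total solar output), not numbers from the essay or this thread.

```python
import math

# Rough scale check on "Kardashev level 1.5-2", using Sagan's interpolation
# K = (log10(P in watts) - 6) / 10. Reference powers are standard ballpark figures.

def kardashev(power_watts: float) -> float:
    return (math.log10(power_watts) - 6) / 10

def power_for_level(k: float) -> float:
    return 10 ** (10 * k + 6)

today_power = 2e13       # ~current human primary power consumption, W
earth_sunlight = 1.7e17  # total sunlight intercepted by Earth, W
solar_output = 3.8e26    # total luminosity of the Sun, W

print(f"Humanity today:        K ~ {kardashev(today_power):.2f}")
print(f"All sunlight on Earth: K ~ {kardashev(earth_sunlight):.2f}")
print(f"K = 1.5 needs {power_for_level(1.5):.0e} W (~6,000x the sunlight hitting Earth)")
print(f"K = 2.0 needs {power_for_level(2.0):.0e} W (within a factor of a few of total solar output)")
```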
I do think it’s fair to call out the essay for not highlighting the ways in which it might be lowballing things or rolling in an assumption of deliberate slowdown; I’d rather it had given more of a nod to these considerations and made the conditions of its prediction clearer.
“It’s plausible that things could go much faster than this, but as a prediction about what will actually happen, humanity as a whole probably doesn’t want things to get incredibly crazy so fast, and so we’re likely to see something tamer.” I basically agree with that.
I feel confused about how this squares with Dario’s view that AI is “inevitable,” and “driven by powerful market forces.” Like, if humanity starts producing a technology which makes practically all aspects of life better, the idea is that this will just… stop? I’m sure some people will be scared of how fast it’s going, but it’s hard for me to see the case for the market in aggregate incentivizing less of a technology which fixes ~all problems and creates tremendous value. Maybe the idea, instead, is that governments will step in...? Which seems plausible to me, but as Ryan notes, Dario doesn’t say this.