Once I am caught up I intend to get my full Barbieheimer on some time next week, whether or not I do one right after the other. I’ll respond after. Both halves matter – remember that you need something to protect.
That’s why it has to be Oppenheimer first, then Barbie. :)
When I look at the report, I do not see any questions about 2100 that are more ‘normal,’ such as the size of the economy or population growth, other than global temperature, which is expected to be essentially unchanged by an AGI they give a 75% chance of arriving by then. So AGI not only isn’t going to vent the atmosphere and boil the oceans or create a Dyson sphere, it also isn’t going to design us superior power plants or forms of carbon capture or safe geoengineering. This is a sterile AGI.
This doesn’t feel like much of a slam dunk to me. If you think very transformative AI will be highly distributed, safe by default (i.e. 1-3 on the table) and arrive on the slowest end of what seems possible, then maybe we don’t coordinate to radically fix the climate. Instead we use TAI to adapt well individually, decarbonize, get fusion power and spaceships, but neither fix the environment nor melt the earth, and just kind of leave it be because we can’t coordinate well enough to agree on a solution. Honestly that seems not all that unlikely, assuming alignment, a slow takeoff and a mediocre outcome.
If they’d asked about GDP and they’d just regurgitated the numbers from the business-as-usual UN forecast right after being queried about AGI, then it would be a slam dunk that they’re not thinking it through (unless they said something very compelling!). But to me, while parts of their reasoning feel hard to follow, there’s nothing clearly crazy.
The view that the Superforecasters take seems to be something like: “I know all these benchmarks seem to imply we can’t be more than a low number of decades off powerful AI, and these arguments and experiments imply superintelligence should come soon after and could be unaligned, but I don’t care, it all leads to an insane conclusion, so that just means the benchmarks are bullshit, or that one of the ‘less likely’ ways the arguments could be wrong is correct.” (Note that they didn’t disagree on the actual forecasts of what the benchmark scores would be, only their meaning!)
One thing I can say is that it very much reminds me of Da Shi in the novel Three Body Problem (who—and I know this is fictional evidence—ended up being entirely right in this interaction that the supposed ‘miracle’ of the CMB flickering was a piece of trickery):
“You think that’s not enough for me to worry about? You think I’ve got the energy to gaze at stars and philosophize?”
“You’re right. All right, drink up!”
“But, I did indeed invent an ultimate rule.”
“Tell me.”
“Anything sufficiently weird must be fishy.”
“What… what kind of crappy rule is that?”
“I’m saying that there’s always someone behind things that don’t seem to have an explanation.”
“If you had even basic knowledge of science, you’d know it’s impossible for any force to accomplish the things I experienced. Especially that last one. To manipulate things at the scale of the universe—not only can you not explain it with our current science, I couldn’t even imagine how to explain it outside of science. It’s more than supernatural. It’s super-I-don’t-know-what....”
“I’m telling you, that’s bullshit. I’ve seen plenty of weird things.”