What sort of future history would have “no significant impact,” and HOW? This is like asking, after the first Trinity fission weapon test, what the probability is that by 2022 there would have been “no significant impact” from nuclear weapons. It’s 0: the atmosphere of the earth was already contaminated; we just didn’t know it yet.
Zero is not a probability. What if Japan had surrendered before the weapons could be deployed, and the Manhattan Project had never been completed? I could totally believe in a one-in-one-hundred-thousand probability that nuclear weapons just never saw proliferation, maybe more.
Specifically, I am referring to after the Trinity test. They had assembled a device and released about 25 kilotons. I am claiming that for AI, the “Trinity test” has already happened: LLMs, game-playing RL agents that beat humans, and all the other 2022 AI results show that larger yields are possible. Trinity had “no direct significance” on the world in that it didn’t blow up a city, and the weapon wasn’t deployable on a missile, but it showed both were readily achievable (“make a reactor and produce plutonium, or separate out U-235”) and that fission yield was real.
In our world, we don’t have AI better than humans at everything, and it isn’t yet affecting real-world products much, but I would argue that the RL agent results show that the equivalent of “fission yield”, “superintelligence”, is possible and also readily achievable (big neural network, big compute, big training data set = superintelligence).
After the Trinity test, even if Japan had surrendered, the information had already leaked, and motivated agents (all the world powers) would have started power-seeking to make nukes. What sort of plausible history has them not doing it? A worldwide agreement that they won’t? What happens when, in a nuke-free world, one country attacks another? Won’t the target rush to develop fission weapons? Won’t the other powers learn of this and join in?
It’s unstable. A world where all the powers agree not to build nukes is not a stable one; any perturbation will push it toward history states closer to our real timeline.
I would argue that a world with agreements not to build AGI is similarly unstable.
What if you just s/0/epsilon/g?
I’d imagine Gerald’s “probability 0” is something like Metaculus’s “resolved as yes”: that is, the event in question has already happened.
Right. Because either you believe AI has already made an impact (it has; see recommender systems for a production use of AI that matters), or you believe it will imminently.
The true probability when Metaculus resolves a question as yes isn’t actually zero, but the chance you get forecaster credit if you are on the wrong side of the bet IS.
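To make that asymmetry concrete, here is a minimal sketch (assuming a standard log scoring rule, not Metaculus’s actual scoring formula, and using an illustrative log_score helper of my own): a forecast of exactly 0 on a question that resolves yes earns an unbounded penalty, while the s/0/epsilon/g fix keeps the loss finite.

    import math

    def log_score(p_forecast: float, outcome: bool) -> float:
        # Standard log score: closer to 0 is better; forecasting exactly 0
        # on an event that happens scores negative infinity.
        p = p_forecast if outcome else 1.0 - p_forecast
        return math.log(p) if p > 0 else float("-inf")

    # The question resolves yes ("AI had a significant impact"):
    print(log_score(0.0, True))    # -inf: no forecaster credit, ever
    print(log_score(1e-5, True))   # about -11.5: bad, but finite
    print(log_score(0.9, True))    # about -0.11
    print(log_score(0.999, True))  # about -0.001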
I definitely agree that his confidence in the idea that AI is significant is unjustifiable, but 0 is a probability; it’s just the extreme where improbability becomes impossibility.
And that’s coming from me, someone who does believe that AI being significant has a pretty high probability.
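As a side note on why exactly 0 is such a degenerate assignment, here is a sketch of Cromwell’s rule (bayes_update is my own illustrative helper): a prior of exactly 0 can never be updated away, no matter how strong the later evidence, whereas even a tiny epsilon can.

    def bayes_update(prior: float, lik_if_true: float, lik_if_false: float) -> float:
        # Posterior P(H | E) via Bayes' rule.
        numerator = lik_if_true * prior
        denominator = numerator + lik_if_false * (1.0 - prior)
        return numerator / denominator if denominator > 0 else prior

    # Evidence that is 1000x more likely if the hypothesis is true:
    print(bayes_update(0.0, 0.99, 0.00099))   # 0.0: a zero prior ignores the evidence entirely
    print(bayes_update(1e-6, 0.99, 0.00099))  # ~0.001: an epsilon prior moves up ~1000x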
Right. And I am saying it is impossible, except for the classes of scenarios I mentioned, because transformative AI is an attractor state.
There are many possible histories, and many possible algorithms that humans, or current AI recursively self-improving, could try.
But the optimization arrow always points in the direction of more powerful AI, and this is recursive. Given sufficient compute, it’s always the outcome.
It’s kinda like saying, “the explosives on a fission bomb have detonated and the nuclear core is built to design spec. What is the probability it doesn’t detonate?”
Essentially 0. It’s impossible. I will acknowledge there is actually a possibility that the physics work out such that it fails to produce any fission gain and stops, but it is probably so small it wouldn’t happen in the lifespan of the observable universe.
Can you explain why it’s “unjustifiable”? What is a plausible future history, even a merely possible one, free of apocalypse, where humans plus existing AI systems fail to develop transformative systems by 2100?
I don’t have a plausible story either, and I think very high (90%+) confidence in significant impact is reasonable.
But the issue I have is roughly that a probability of literally 100%, or only a hair below it, is unjustifiable, because we must always reserve some probability for “our model is totally wrong.”
I do think very high confidence is justifiable, though.
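One way to formalize that reservation (a purely illustrative sketch; hedged_forecast is a made-up helper, not a standard formula): mix the in-model probability with a fallback used when the model is totally wrong. Even a small weight on model error keeps the final forecast strictly below 100%.

    def hedged_forecast(p_model: float, p_model_wrong: float, p_if_wrong: float = 0.5) -> float:
        # Blend the model's probability with an ignorance prior,
        # weighted by how likely the model is to be totally wrong.
        return (1.0 - p_model_wrong) * p_model + p_model_wrong * p_if_wrong

    # 99.9% confident in-model, reserving 2% for "our model is totally wrong":
    print(hedged_forecast(0.999, 0.02))  # ~0.989: very high confidence, never literally 100%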
I accept that having some remaining probability mass for “unknown unknowns” is reasonable. And you can certainly talk about ideas that didn’t work out even though they had advantages and existed 60 years ago. Jetpacks, that sort of thing.
But if you do more than a cursory analysis, you will see the gain from a jetpack is saving a bit of time, at the cost of risking your life, absurd fuel consumption, high expense, and deafening noise for your neighbors. The gain isn’t worth it.
The potential gain from better AI unlocks most of the resources of the solar system (via automated machinery that can manufacture more automated machinery) and makes world conquest feasible. It’s literally a “get the technology or lose” situation. All it takes is a belief that another power is close to having AI able to operate self-replicating machinery, and you either invest in the same tech or lose your entire country. Sort of like how, right now, Google believes it either releases a counter to BingGPT or loses its company.
So yeah, I don’t see a justification for even 10 percent doubt.
0 is a perfectly valid probability estimate. Obviously the chance that an event observed to have happened did not in fact happen is... ok, fair. Maybe not zero. You could be mistaken about what actually happened in ground-truth reality.
So, for instance, if AIs take over, wipe everyone’s memories, and put them in a simulation, the observed outcome in 2100 is that AI didn’t do anything.