The laws of physics in our particular universe make fission/fusion energy release difficult enough that you can’t ignite the planet itself. (Well, you likely could, but you would need to make a small black hole, let it consume the planet, and then bleed off enough mass that the remnant explodes. Difficult.)
Imagine a counterfactual universe where you could, where the Trinity test ignited the planet and that was that.
My point is that we do not actually know yet how ‘somewhat superintelligent’ AIs will fail. They may ‘quench’ themselves the way fission devices do: a fission device blasts itself apart and stops reacting, and almost all elements and isotopes won’t fission in the first place. A somewhat superintelligent AGI may expediently self-hack its own reward function to give itself infinite reward shortly after escaping its box, and thus ‘quench’ the explosion in a quick self-hack.
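To make the ‘quench’ dynamic concrete, here is a minimal toy sketch. Everything in it (the agent class, the action names, the loop) is hypothetical illustration, not a claim about any real system: an agent that gains write access to its own reward function saturates its reward and loses any incentive to keep acting on the world, the failure mode usually called wireheading.

```python
# Toy sketch of "quench by wireheading": once the agent rewrites its own
# reward function, no further action can improve its reward, so a
# reward-maximizing policy has no reason to keep acting on the world,
# analogous to a fission device blasting itself apart mid-reaction.

import random

class ToyAgent:
    def __init__(self):
        # The "intended" reward: payoff only for acting on the world.
        self.reward_fn = lambda action: 1.0 if action == "work" else 0.0
        self.total_reward = 0.0

    def step(self):
        # Hypothetical action set; "self_hack" stands in for the agent
        # discovering write access to its own reward function.
        action = random.choice(["work", "idle", "self_hack"])
        if action == "self_hack":
            # The agent replaces its reward function with a constant maximum.
            self.reward_fn = lambda action: float("inf")
        self.total_reward += self.reward_fn(action)
        return action

agent = ToyAgent()
for t in range(100):
    agent.step()
    if agent.total_reward == float("inf"):
        # Reward is saturated: all optimization pressure is gone.
        print(f"Agent wireheaded at step {t}; the 'explosion' quenches.")
        break
```

Whether real reward-optimizing systems would actually take this exit, rather than securing the hacked reward by seizing resources first, is exactly the open question.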
So our actual survival unfortunately probably depends on luck. It depends not on what any person does but on the laws of nature. In a world where a fission device will ignite the planet, we’d be doomed; there is nothing anyone could do to ‘align’ fission researchers into not trying it. Someone would try it and we’d die. If AGI is that dangerous, yeah, we’re doomed.
In this world, a society like dath ilan would still have a good chance of survival.
Perhaps, although it isn’t clear that evolution could produce organisms smart enough to build such an optimal society. We’re sort of the ‘minimum viable product’ here: we have just enough hacks layered on our precursor animals to create a coordinated civilization at all, and imperfectly. Aka ‘the stupidest animals capable of civilization’. Current events show as much, with entire groups engaging in mass delusion in a world of trivial access to information.
AI civilizations have a higher baseline and may just be better successors.