At the limits of technology you can just convert any form of matter into energy by dumping it into a small black hole. Small black holes are actually really hot and emit appreciable fractions of their total mass per second through Hawking radiation, so if you start a small black hole by concentrating lasers in a region of space, and you then feed it matter with a particle accelerator, you have essentially a perfect matter → energy conversion. This is all to say that a superintelligence would certainly have uses for the kinds of atoms our bodies (and the earth) are made of.
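For scale, the standard textbook formulas for Hawking radiation put rough numbers on the claim above. A minimal sketch in Python, assuming a hypothetical black hole of about a thousand tonnes and ignoring the O(1) particle-species corrections to the emission rate:

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23       # Boltzmann constant, J/K

def hawking_estimates(M):
    """Textbook Hawking-radiation estimates for a black hole of mass M (kg)."""
    T = hbar * c**3 / (8 * math.pi * G * M * k_B)            # temperature, K
    P = hbar * c**6 / (15360 * math.pi * G**2 * M**2)        # radiated power, W
    frac_per_s = (P / c**2) / M                              # fraction of M lost per second (initial rate)
    lifetime = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)  # time to evaporate completely, s
    return T, P, frac_per_s, lifetime

M = 1.0e6  # kg -- a hypothetical ~1000-tonne black hole
T, P, frac_per_s, lifetime = hawking_estimates(M)
print(f"temperature      ~ {T:.1e} K")   # ~1e17 K
print(f"radiated power   ~ {P:.1e} W")   # ~4e20 W
print(f"mass loss        ~ {frac_per_s:.1e} of its mass per second")
print(f"evaporation time ~ {lifetime:.0f} s if it is not fed")
```

At that mass the hole radiates on the order of 10^20 W (vastly more than human civilization uses) and would evaporate within minutes if not continuously fed, which is the sense in which such an object is a near-perfect matter → energy converter.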
I don’t think this follows. Even if there is engineering that overcomes the practical obstacles to building and maintaining a black hole power plant, it is not clear a priori that converting a non-negligible percentage of available atoms into energy would be required or useful for whatever an AI might want to do. At some scale, generating more energy does not advance one’s goals, but only increases the waste heat emitted into space.
Obviously, things become lethal anyway (both for life and for the AI) long before anything more than a tiny fraction of the mass-energy of the surface layers of a planet has been converted by the local civilization’s industries, precisely due to the problem of waste heat. But building hardware massive enough to cause problems of this kind takes time, and causes lesser problems on the way. I don’t see why normal environmental regulations couldn’t stop such a process at that point, unless the entities doing the hardware-building are also in control of hard military power.
An unaligned superintelligence would be more efficient than humans at pursuing its goals on all levels of execution, from basic scientific work to technical planning and engineering to rallying social support for its values. It would therefore be a formidable adversary. In a world where it would be the only one of its kind, its soft power would in all likelihood be greater than that of a large nation-state (and I would argue that, in a sense, something like GPT-4 would already wield soft power rivalling that of many nation-states if its use were as widespread as, say, that of Google). It would not, however, be able to work miracles, and its hard power could plausibly be bounded if military uses of AI remain tightly regulated and military computing systems are tightly secured (as they should be anyway, AGI or not).
Obviously, these assumptions of controllability do not hold forever (e.g. in a far-future setting where the AI controls poorly regulated off-world industries in places where no humans have any oversight). But especially in a near-term, slow-takeoff scenario, I do not find compelling the notion that the result will be an immediate intelligence explosion, unconstrained by the need to empirically test ideas (most ideas, in human experience, don’t work), followed by the rapid extermination of humanity as the AI consumes all resources on the planet without encountering significant resistance.
If I had to think of a realistic-looking AI-driven human extinction scenario, I would tend to look at AI massively increasing per capita economic output, thereby generating comfortable living conditions for everyone, while quietly engineering life in a way intended to stop a population explosion but instead resulting in sustained below-replacement birth rates. But this class of extinction scenario leaves a lot of time for alignment and would seem to lead to the continued existence of civilization.
At some scale, generating more energy does not advance one’s goals, but only increases the waste heat emitted into space.
Sure, the AI probably can’t use all the mass-energy of the solar system efficiently within the next week or something, but that just means that it’s going to want to store that mass-energy for later (saving up for the heat-death of the universe), and the configuration of atoms efficiently stored for future energy conversion doesn’t look at all like humans, with our wasteful bodies at temperatures measured in the hundreds of billions of nanokelvins.
Obviously, things become lethal anyway (both for life and for the AI) long before anything more than a tiny fraction of the mass-energy of the surface layers of a planet has been converted by the local civilization’s industries, precisely due to the problem of waste heat. But building hardware massive enough to cause problems of this kind takes time, and causes lesser problems on the way. I don’t see why normal environmental regulations couldn’t stop such a process at that point, unless the entities doing the hardware-building are also in control of hard military power.
I think we’re imagining slightly different things by “superintelligence”, because in my mind the obvious first move of the superAI is to kill literally all humans before we ever become aware that such an entity existed, precisely to avoid even the minute chance that humanity is able to fight back in this way. The oft-quoted way around these parts for the AI to kill us all without us knowing is by figuring out which DNA sequences to send to a lab to have them synthesized into proteins, then shipped to the door of a dumb human who’s being manipulated by the AI to mix various powders together, creating either a virus much more lethal than anything we’ve ever seen, or a new species of bacteria with diamond skin, or some other thing that can be made from DNA-coded proteins. Or several different viruses at the same time.
Sure, the AI probably can’t use all the mass-energy of the solar system efficiently within the next week or something, but that just means that it’s going to want to store that mass-energy for later (...)
If the AI can indeed engineer black-hole powered matter-to-energy converters, it will have so much fuel that the mass stored in human bodies will be a rounding error to it. Indeed, given the size of other easily accessible sources, this would seem to be the case even if it has to resort to more primitive technology and less abundant fuel for its terminal energy source, such as hydrogen-hydrogen fusion. Almost irrespective of what its terminal goals are, it will have more immediate concerns than going after that rounding error. Likewise, it would in all likelihood have more pressing worries than trying to plan out its future to the heat death of the universe (because it would recognize that no such plan will survive its first billion years, anyway).
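As a crude sanity check on the “rounding error” point, here is a back-of-envelope sketch; the population and body-mass figures are rough assumptions, not numbers taken from the discussion above:

```python
# Rough inputs (assumptions for illustration, not figures from the thread)
c = 2.998e8                 # speed of light, m/s
population = 8e9            # people
avg_body_mass = 60.0        # kg per person, assumed
solar_luminosity = 3.8e26   # W, total radiated power of the Sun

human_biomass = population * avg_body_mass   # ~5e11 kg
energy = human_biomass * c**2                # E = m*c^2, ~4e28 J

print(f"mass-energy of all human bodies ~ {energy:.1e} J")
print(f"equivalent to ~ {energy / solar_luminosity:.0f} s of the Sun's output")
```

Under those assumptions, annihilating every human body would yield on the order of 10^28 J, roughly what the Sun radiates in a couple of minutes.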
I think we’re imagining slightly different things by “superintelligence”, because in my mind the obvious first move of the superAI is to kill literally all humans (...) The oft-quoted way around these parts for the AI to kill us all without us knowing is by figuring out which DNA sequences to send to a lab to have them synthesized into proteins, (...creating...) a virus much more lethal than anything we’ve ever seen, or a new species of bacteria with diamond skin, or some other thing that can be made from DNA-coded proteins.
I am imagining by “superintelligence” an entity that is for general cognition approximately what Stockfish is for chess: globally substantially better at thinking than any human expert in any domain, although possibly with small cognitive deficiencies remaining (similar to how it is fairly easy to find chess positions that Stockfish fails to understand but that are not difficult for humans). It might be smarter than that, of course, but anything with these characteristics would qualify as an SI in my mind.
I don’t find the often-quoted diamondoid bacteria very convincing. Of course it’s just a placeholder here, but still I cannot help but note that producing diamondoid cell membranes would, especially in a unicellular organism, more likely be an adaptive disadvantage (cost, logistics of getting things into and out of the cell) than a trait that is conducive to grey-gooing all naturally evolved organisms. More generally, it seems to me that the argument from bioweapons hinges on the ability of the superintelligence to develop highly complex biological agents without significant testing. It furthermore needs to develop them in such a way, again without testing, that they are quickly and quietly lethal after spreading through all or most of the human population without detection. In my mind, that combination of properties borders on assuming the superintelligence has access to magic, at least in a world that has reasonable controls against access to biological weapons manufacturing and design capabilities in place.
When setting in motion such a murderous plan, the AI would also, on its first try, have to be extremely certain that it is not going to get caught, given that it is playing the long game we assume it is playing. Otherwise, cooperation with humans followed by expansion beyond Earth seems like a less risky strategy for long-term survival than hoping that killing everyone will go right and that there is indeed nothing left for it to learn from living organisms.