[Question] Will nanotech/biotech be what leads to AI doom?
I don’t know much about nanotech/biotech, but what little I know suggests that this will be the earliest failure point where AI can cause doom for humans. Because of this, I figured I should start learning more about nanotech/biotech, and asking LessWrong for direction seemed like a good place to start.
My heuristic for why nanotech/biotech is critical, and for why I am lumping them together:
AI doom due to misalignment is arguably about the “attack surface”; that is, if there is a high-dimensional way to have big effects on humans, then various standard arguments about the importance of alignment will apply. This seems to be the case for nanotech/biotech, in the sense that there are many kinds of germs, nanobots, chemicals, and so on that could plausibly be made with the right technological development, and these would exist in ~the same environment as people (e.g. they could be released into the atmosphere). Human civilization essentially makes the assumption that people can safely breathe the atmosphere, but that assumption could be broken by nanotech/biotech.
Nanotech/biotech is not the only thing with an exponentially high-dimensional attack surface; there are also social networks, computer networks, and probably more. However, nanotech/biotech seems to have the “advantage” of being small-scale; it can equilibrate in milliseconds-to-minutes, and can exist in cubic millimeters to cubic meters, which makes it much more feasible to model and collect data on than grand societal things. This suggests that you would not need an especially advanced AI to amplify nanotech/biotech. It doesn’t even need to be a general intelligence; it just needs to come up with more powerful ways of doing nanotech/biotech. So AI-powered nanotech/biotech seems likely to arrive years if not decades before AGI. (Similar to how people see GPT-3 as the precursor to Prosaic AGI, think of AlphaFold 2 as the precursor to AI-powered biotech.)
Incidentally, this also makes it much harder to align. A common counterargument to AI x-risk is “wouldn’t a superintelligence understand that we don’t want it to do bad stuff?”, after which a common counterreply is “yeah, but we need some way of specifying that it should care about what we want”. However, an AI that doesn’t understand large-scale things such as humans and our wants might still understand small-scale stuff like nanotech/biotech. It might literally destroy humanity not because it doesn’t realize that it should care about us, but instead because it doesn’t even realize we exist.
Now, this was all a thought I came up with yesterday based on very little knowledge about nanotech/biotech, so it might be totally wrong and naive. But it seems very different from the common AI risk models, so I thought it would be strategically important to consider whether it’s true.