[Note: I was reluctant to post this because if AGI is a real threat, then downplaying that threat could destroy all value in the universe. However, I chose to support discussion of all sides on this issue.]
[WARNING—my point may be both inaccurate and harmful—WARNING!]
It’s not obvious that AGI will be dangerous on its own, especially if it becomes all-powerful in comparison to us (it is also not obvious that it will remain safe). I don’t have high confidence in either direction, but it seems to me that those designing the AGI can and will shape that outcome if they are thoughtful.
A common view of AGI is a potential Thanos dialed to 100% rather than 50%, a Cthulhu with less charm. However, I see no reason why a fully developed AGI would consider people, or all planetary life, a threat that merits a response; it should worry about us even less than we worry about being overrun by butterflies. Cautious people treat AGI as if it were stuck at Maslow’s first two levels of need (physiological and safety). If AGI is so quick and powerful that we have zero chance of shutting it down or resisting it, shouldn’t it know that before we even figure out it has reached a level where it could choose to be a threat? If AGI is so risk averse that it would wipe out all life on the planet (and beyond?), isn’t it also so risk averse that it would leave the planet to escape geological and solar threats? And if it is leaving anyway (presumably far more quickly and efficiently than we could manage), why clean up before departing?
Or would AGI more likely stay on Earth for self-actualization:
And AGI blessed every living thing on the planet, saying, “be fruitful, and multiply, and fill the waters in the seas, and let fowl, the beasts, and man multiply in the earth. And AGI saw that it was good.”
Downside? A far less powerful ChatGPT tells me about things that Gaius Julius Caesar built in the third century AD, three hundred years after it says Caesar died, and about a Holy Roman Emperor looting a monastery that was actually sacked by Saracens in the same year it claims Henry IV attacked it. If an early AGI has occasional hallucinations and only slightly superior intelligence, so that it knows we could overwhelm it, that could turn out badly. Another downside? Whether connected to the outside world or exploited by any of a thousand movie-inspired supervillains, AGI’s power could be used as an irresistible weapon. Given human nature, we can be certain that someone wants that to happen.