On page 231, Bostrom lists a number of serious potential risks from technologies other than AI, but he apparently stops short of saying that science in general may soon reach a point where it is too dangerous to be allowed to develop without strict controls. He considers whether AGI could be the tool that prevents these other technologies from being used catastrophically, but the unacknowledged elephant in the room is the total surveillance state that would be required to prevent misuse of these technologies in the near future, at least for as long as humans remain recognizably human and there is still something to lose from unfriendly AI (UFAI). Is centralized surveillance of everything, everywhere, the future with the least existential risk?