I don’t think that “entities that are formed of humans and computers (and other objects) interacting” is sufficiently specific to be considered a type of existential risk.
True, but Johnicholas still has a point about “things that look like HAL,” namely, that such scenarios present the uFAI risk in an unconvincing manner. To most people, I suspect, a scenario in which individuals and organizations gradually come to depend too much on AI would be more plausible.