For a person at a starting point of the form {AGI doesn’t pose a risk / I don’t get it}, I’d say this video+argument pushes thinking in a more robustly accurate direction than most brief-and-understandable arguments I’ve seen. Another okay brief-and-understandable argument is the analogy “humans don’t respect gorillas or ants very much, so why assume AI will respect humans?”, but I think that argument smuggles in a lot of cognitive-architecture assumptions that are less robustly true across possible futures, compared to the speed-advantage argument, which seems both important and robustly valid across most futures.
It sounds like you’re advocating starting with the slow-motion camera concept, and then graduating into brainstorming AGI attack vectors and defenses until the other person becomes convinced that there’s a lot of ways to launch a conclusive humanity-ending attack and no way to stop them all.
My concern with the overall strategy is that the slow-motion camera argument may promote a way of thinking about these attacks and defenses that becomes unmoored from the speed at which physical processes can occur, and the accuracy with which they can be usefully predicted even by an AGI that’s extremely fast and intelligent. Most people do not have sufficient appreciation for just how complex the world is, how much processing power it would take to solve NP-hard problems, or how crucial the difference is between 95% right and 100% right in many cases.
If your objective is to convince people that AGI is something to take seriously as a potential threat, I think your approach would be accuracy-promoting if it moves people from “I don’t get it/no way” to “that sounds concerning—worth more research!” If it moves people to forget or ignore the possibility that AGI might be severely bottlenecked by the speed of physical processes, including the physical processes of human thought and action, then I think it would be at best neutral in its effects on people’s epistemics.
However, I do very much support and approve of the effort to find an accuracy-promoting and well-communicated way to educate people and raise discussion about these issues. My question here is about the specific execution, not the overall goal, which I think is good.
I agree that it’s worth thinking critically about the ways AGI could be bottlenecked by the speed of physical processes. While this is an important area of study and thought, I don’t see how “there could be this bottleneck though!” matters to the discussion. It’s true. There likely is this bottleneck. How big or small it is requires some thought and study, but that thought and study presupposes you already have an account of why the bottleneck operates as a real bottleneck from the perspective of a plausibly existing AGI.