I think that “long-term planning risk” and “exfiltration risk” are both really good ways to explain AI risk to policymakers. Also, “grown not built”.
They delineate pretty well some criteria for what the problem is and isn’t. Systems that can’t plan long-term or exfiltrate themselves are basically not the concern here (although theoretically there might be a small chance of very strange things growing in mind-design space that cause human extinction without long-term planning or knowing how to exfiltrate).
I don’t think these framings are better than the fate-of-humans-vs-gorillas analogy, which is a big reason why most of us are here, but splitting the AI risk situation into easy-to-digest components, rather than logically/mathematically simple ones, can go a long way (depending on how immersed the target demographic is in social reality and low-trust environments).