There is a definitional issue here. Some categories of scenario are hard to classify as an "AI takeover" even when there is broad recognition of a real and likely risk of some kind. Most people stake out positions at the binary extremes of outcome, good or bad, with little consideration for the plausible quasi-equilibrium states in between that fall out of some risk models. For researchers working on those intermediate outcomes, the binary framing reads as a false dichotomy.
As another heuristic: the field's inability to agree on a common set of elementary computational assumptions, grounded in physics, from which AI risk models are derived is by itself sufficient reason to be skeptical of any particular risk model, without knowing much else about it.