An AGI that has access to massive computing power, can self-improve, and can acquire as much information as it wants (from the internet and other sources) could easily be a global threat.
Interestingly, hypothetical UFAI (value drift) risk resembles other existential risks in its counterintuitive impact, but more so: compared to some other risks, there are many steps at which you can fail that don't appear dangerous beforehand (because nothing like that has ever happened), and that might also fail to appear dangerous after the fact, or as properties of imagined scenarios in which they're allowed to happen. The grave implications aren't easy to spot. Assuming a soft takeoff, suppose a prototype AGI escapes to the Internet: would that be seen as a big deal if it didn't get enough computational power to become too disruptive? In 10 years it has grown into a major player, and in 50 years it controls the whole future…
Even without assuming an intelligence explosion or other extraordinary effects, the danger of any misstep is absolute; yet arguments against these assumptions are taken as arguments against the risk itself.