Most likely, AGI becomes a super-weapon aligned to a particular person's values, which are not, in the general case, aligned with humanity's.
The proliferation risks of aligned AGI are categorically worse than those of nuclear weapons because the barrier to entry is far lower (general availability of compute, the possibility of an algorithmic overhang, etc.).