I guess the threat model relies on the overhang. If you need x compute for powerful AI, then to ensure safety you need to control all but x of the compute on Earth, or something like that. Controlling the people is probably much easier.
Yes, where killing all humans is an example of “controlling the people”, from the perspective of an Unfriendly AI.