If value alignment is sufficiently harder than general intelligence, then given a large population of strong AIs created at roughly the same time, we should expect none of them to be remotely close to Friendly.