As you note, humans aren’t human-friendly intelligences, or we wouldn’t face internal existential risk.
It’s possible that particular humans approximate human-friendly intelligences.
Assuming it’s not impossible, how would you know? What would constitute a human-friendly intelligence, in anything other than negative terms?