It’s not obvious to me that creating super-smart people would have a net positive effect, because motivating them to decrease AI risk is itself an alignment problem. What if they instead decide to accelerate AI progress, or do nothing at all?