If so, I think he’s wrong here. The book may lead them to realize that unaligned AGI doesn’t actually constitute an improvement in capabilities; it’s the creation of a new enemy. A bridge that might fall down is not a useful bridge, and a military power informed of that wouldn’t want to build it.
It’s in no party’s interests to create AGI that isn’t aligned with at least the people overseeing the research project.
An AGI aligned with even a few living humans will generally lead to better outcomes than an AGI aligned with nobody at all: we share enough with each other to know that, and no one, coherently extrapolated, is as crass or parochial as the people we are now. Alignment theory should be promoted to every party.