NAMSI: A promising approach to alignment
Media-driven fears that AI will cause major havoc, up to and including human extinction, rest on the worry that we will not solve alignment before we reach AGI. What hasn’t been sufficiently appreciated is that alignment is, most fundamentally, about morality.
This is where narrow AI systems trained to understand morality hold great promise. We humans may not have the intelligence to solve alignment on our own, but by creating narrow AI systems that understand and advance morality, we can solve it sooner.
Since our greatest alignment fears concern the point at which we reach artificial super-intelligence (ASI), perhaps narrow, morality-focused ASIs should take the lead on that work. Narrow AI systems already approach top-level legal and medical expertise, and because progress in those two domains is so rapid, we can expect major advances in the next few years.
We can develop a top-level, narrow super-intelligent AI that advances the morality at the heart of alignment. Such a system might be dubbed Narrow Artificial Moral Super-intelligence, or NAMSI.
Some developers, like Stability AI, understand the advantage of developing narrow AI rather than working on more ambitious, but less attainable, AGI. In fact, Stability’s business model centers on selling narrow AI to countries and corporations.
A question we face as a global society is: where might we best apply AI? Given the absolute necessity of solving alignment, and recognizing that morality is our central challenge here, developing NAMSI may prove our most promising application as we near AGI.
But why pursue narrow artificial moral super-intelligence rather than simply artificial moral intelligence? Because it is within our grasp. While morality has great complexities that challenge humans, our success with narrow legal and medical AI is instructive. If we train AI systems to better understand the workings of morality, we have reason to expect that they will achieve a level of expertise in this domain that far exceeds our own. That expertise could then guide them in solving alignment more effectively than human intelligence currently seems able to.