“Artificial Intention” doesn’t sound catchy at all to me, but that’s just my opinion.
Personally, I prefer to think of the “Alignment Problem” more generally rather than “AI Alignment”. Regardless of who has the most power (humans, AI, cyborgs, aliens, etc.) and who has superior ethics, conflict arises when participants in a system are not all aligned.
I think that’s better called simply a coordination or cooperation problem. “Alignment” has the unfortunate implication of one party wanting to forcibly change the others. With AI it’s fine, because if you’re creating a mind from scratch, it would be the height of stupidity to create an enemy.