Around here, humans using AI to do bad things is referred to as a “misuse risk”, whereas “misaligned AI” refers exclusively to cases where the AI is the primary agent. There are many thought experiments where the AI convinces humans to do things that result in bad outcomes: “Execute this plan for me, human, but don’t look at the details too hard, please.” This is still considered a case of misaligned AI.
If you break it down analytically, two elements are needed for bad things to happen: the will to do so and the power to do so. As Daniel notes, some humans have already had the power for many decades, but fortunately none have had the will. AI is expected to be extremely powerful too, and AI will have its own will (including a will to power), so both misaligned AI and misuse risks are things to take seriously.
Thanks for noting the terminology, useful to have in mind.
I have a follow-on comment and question in my response to Daniel that I would be interested in your reaction to.