Phishing emails might have bad text on purpose, so that security-aware people won’t click through: the next stage often involves speaking to a human scammer, who prefers to speak only with people[1] who have no idea how to avoid scams.
(Did you ever wonder why the generic phishing SMS you got was so bad? Couldn’t they proofread their one SMS? Well, sometimes they can’t, but sometimes it’s probably on purpose.)
This tradeoff could change if AIs could automate the stage of “speaking to a human scammer”.
But if that stage isn’t automated, then I’m guessing phishing messages[1] will remain much worse than you’d expect, given that the attackers have access to LLMs.
Thanks to Noa Weiss, who worked on fraud prevention and risk mitigation at PayPal, for pointing out this interesting tradeoff. Mistakes are mine.
Assuming a widespread phishing attempt that doesn’t care much who the victims are. I’m not talking about targeting a specific person, such as the CFO of a company.