There was a thread on personalized phishing scams going around earlier. Since AI can do a “reasonable” impersonation of someone, you can automatically generate emails in their style and respond to messages in a way that might fool the average recipient. It’s a meaningful change to the threat model, since attackers can self-host their own language models. More “regular cybersecurity” than “existential risk” though.
For you: https://www.lesswrong.com/posts/jZSzHLqJyNzqcGEEu/automated-computer-hacking-might-be-catastrophic
Only the first two and a half parts are finished (out of eight), but you can mostly get the gist from the intro.