Computer security researcher working on evaluations of LLMs' capability to attack hardware, software, and system users.
Personal page: https://fredheiding.com/
Google Scholar: https://scholar.google.se/citations?user=BJnWJVQAAAAJ
Thanks for your feedback! It’s only a matter of time before scammers make full use of AI. Hopefully, the defense community can use this window to make the most of our head start. Stay tuned for our coming work!
Great discussion. I’d add that it’s context-dependent and somewhat ambiguous. It’s noteworthy that our work shows that all tested AI models conflict with at least three of the eight prohibited AI practices outlined in the EU’s AI Act.
It’s also worth noting that the only real difference between sophisticated phishing and marketing can be the intent behind it, which makes mitigation difficult. Actions by AI companies to prevent phishing might restrict legitimate use cases too much to be viable.