I’m not saying LLMs necessarily raise the severity ceiling on either a bio or cyber attack. I think it’s quite possible that AIs will do so in the future, but I’m less worried about that on a 2-3 year timeframe. Instead, the main effect is to decrease the cost of these attacks and enable more actors to execute them. (As noted, it’s unclear whether this substantially worsens bio threats.)
Second smaller comment:
“if I want to download penetration tools to hack other computers without using any LLM at all I can just do so”
Yes, it’s possible to launch cyber attacks currently. But with AI assistance, launching them will require less personal expertise and cost less. I am slightly surprised that we have not seen a much greater amount of standard cybercrime (the bar I had in mind when I wrote this was not the hundreds-of-deaths bar; it was more like “a statistically significant increase in cybercrime / serious deepfakes / misinformation, in a way that concretely impacts the world, compared to previous years”).