I suppose that depends a lot on how hard anyone is trying to cause mischief, and how much easier it’s going to get to do anything of consequence. 4chan is probably a good prototype of your typical troll “in it for the lulz,” and while they regularly go past what most would call harmless fun, there’s no body count.
The other thing people worry about (and the news has apparently decided is the thing we all need to be afraid of this month...) is conventional bad actors using new tools to do substantially whatever they were trying to do before, but more: confuse, defraud, spread propaganda, what have you. I’m kind of surprised I don’t already have an inbox full of LLM-composed phishing emails… On some level it’s a threat, but it’s also not a particularly hard one to grasp, it’s getting lots of attention, and new weapons and tactics are a constant in conflicts of all types.
I’m still of the mind that directly harmful applications like the above are going to pale next to the economic disruption and social unrest that will come from making large parts of the workforce redundant very quickly. Talk of specific policy doesn’t look like it’s going to be in the Overton window until after AI starts replacing jobs at scale, and the “we’ll have decades to figure it out” theory hasn’t been looking good of late. And when that conversation starts it’s going to suck all the air out of the room and leave little mainstream attention for worrying about AGI.
Right, that is the silver lining. Whether it is enough to counterbalance people actively trying to set the world on fire, I am doubtful.