I wonder if/when/how quickly this will be criminalized in a manner similar to terrorism or using weapons of mass destruction.
If we’re being realistic, this kind of thing would only get criminalized after something bad actually happened. Until then, too many people will think “omg, it’s just a chatbot”. Any politician calling for it would get mocked on every late-night show.
I’m almost certain this is already criminal, to the extent it’s actually dangerous. If you roll a boulder down a hill, you’re up for manslaughter if it kills someone, and reckless endangerment if it could’ve hurt someone but didn’t. It doesn’t matter whether it’s a boulder or software; if you should’ve known it was dangerous, you’re criminally liable.
In this particular case, I have mixed feelings. This demonstration is likely to do immense good for public awareness of AGI risk. It did for me, on an emotional level I hadn’t felt before. But it’s also impossible to know when a dumb bot will stumble onto a really clever idea by accident, or when incremental improvements have produced emergent intelligence. So we need to shut this kind of thing down as much as possible before capabilities get better. Of course, criminal punishment reduces bad behavior but doesn’t eliminate it, so we also need to be able to detect and prevent malicious bot behavior, and keep prevention techniques up to date (likely with aligned, more capable AGI from the bigger corporations) as the bots themselves get more capable.