However, this image is clearly optimized to be scary and disgusting: it looks dangerous, with long rows of sharp teeth. It is an eldritch horror. At this point I'd like to point out the simple, obvious fact that we don't actually know how these models work, and we definitely don't know that they're creepy and dangerous on the inside.
It's optimized to illustrate the point that the neural network isn't trained to actually care about what its trainer thinks it came to care about; it's only optimized to act that way on the training distribution. Unless I'm missing something, arguing that the image is wrong would be equivalent to arguing that the model might truly care about what its human trainers want it to care about. (Which we know isn't actually the case.)