Presumably people think that at some point an AI is able to suffer. So why wouldn’t a neural network be able to suffer?
If so, does that mean we should treat all artificial neural network training as animal experiments? Should we adopt something like “code welfare is also animal welfare”?