If neural networks can suffer, and this can be made precise, then NNs of minimal size could be constructed that are capable of suffering, or that suffer to the maximum degree per unit of size. The opposite of hedonium. We might call it sufferonium; the word alone already sounds horrible. A bad idea, but we should be careful not to let ourselves be blackmailed with it. Unscrupulous agents could put NNs capable of suffering into a device and then do crazy things like 'buy our ink or the printer suffers.'
The same applies to consciousness, if you can create a smallest NN that is conscious.
We don’t see this kind of blackmail in the current world, where it’s near-trivial to make NNs (using real biological neurons) that clearly can suffer.
I agree. I would go even further and say it shows that the concept of suffering is not well-defined.
I see suffering as strongly driven by the social interaction of individuals. Consider: suffering appears only in social animals capable of care.