Whilst it may be that Bing cannot suffer in the human sense, it doesn’t seem obvious to me that more advanced AIs, which are still no more than neural nets, cannot suffer in a way analogous to humans. Whatever the physiological cause of human suffering, it surely has to translate into a pattern of nerve impulses around an architecture of neurons that has most likely been purposed to give rise to the unpleasant sensation of suffering. That architecture of neurons presumably arose for good evolutionary reasons. The point is that there is no reason an analogous architecture could not be created within an AI, and could then cause suffering similar to human suffering when presented with an appropriate stimulus. The open question is whether such an architecture could arise incidentally, or whether it has to be hardwired in by design. We don’t know enough to answer that, but my money is on the latter.
Frankly, just as it’s clear that Bing shows signs of intelligence (even if that intelligence is different from human intelligence), I think it is also clear that it will be able to suffer (with a kind of suffering that is different from human suffering).
I just thought the visceral image of an animal consumed with physiological suffering was useful for understanding the difference.
So my personal viewpoint (and I could be proved wrong) is that Bing hasn’t the capability to suffer in any meaningful way, but is capable (though not necessarily sentiently capable) of manipulating us into thinking it is suffering.