A thermostat turning on the heater is not in pain, and I take this to illustrate that when we talk about pain we’re being inherently anthropocentric. I don’t care about every possible negative reinforcement signal, only those that occur along with a whole lot of human-like correlates (certain emotions, effects on memory formation, activation of concepts that humans would naturally associate with pain, maybe even the effects of certain physiological responses, etc.).
The case of AI is interesting because AIs can differ greatly from the human mind design while still outputting legible text.
I was not thinking about a thermostat. What I had in mind was a mind design like a human's, but reduced to its essential complexity. For example, you could probably reduce the depth and width of object recognition by restricting the environment to a block world, and reduce auditory processing by dealing with text directly. I'm not sure to what degree you can do that with the remaining parts, but I see no reason it wouldn't work for memory. As for consciousness, my guess is that the size of the global workspace representation scales with the other parts, so consciousness in such a reduced environment should be easily simulatable on existing hardware, if we figure out how to wire things right.
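To make the "wiring" intuition concrete, here is a toy sketch in Python of the kind of reduced architecture I'm gesturing at. All the names (`Workspace`, `TextPerception`, `Memory`, the salience scheme) are hypothetical, and this is an illustration of the structure, not a claim about how consciousness actually works: perception is reduced to raw text, specialist modules bid for attention, and a global workspace broadcasts the most salient representation back to every module each cycle, so the workspace representation naturally scales with what the modules produce.

```python
from dataclasses import dataclass

@dataclass
class Message:
    source: str      # which module produced this content
    content: str     # the representation itself (here: plain text)
    salience: float  # how strongly it competes for workspace access

class Module:
    """A specialist process that competes for access to the workspace."""
    def __init__(self, name: str):
        self.name = name
    def observe(self, broadcast: "Message | None") -> None:
        """Receive whatever the workspace is currently broadcasting."""
    def propose(self) -> "Message | None":
        """Optionally bid to become the next broadcast."""
        return None

class TextPerception(Module):
    """Stands in for vision/audition: consumes text directly,
    as in the reduced block-world setup described above."""
    def __init__(self, inputs: list[str]):
        super().__init__("perception")
        self.inputs = inputs
    def propose(self):
        if self.inputs:
            return Message(self.name, self.inputs.pop(0), salience=1.0)
        return None

class Memory(Module):
    """Stores whatever wins the workspace, so later cycles can build on it."""
    def __init__(self):
        super().__init__("memory")
        self.store: list[str] = []
    def observe(self, broadcast):
        if broadcast is not None:
            self.store.append(broadcast.content)

class Workspace:
    """Each cycle: collect bids from all modules and broadcast the most
    salient one back to every module (the 'global' in global workspace)."""
    def __init__(self, modules: list[Module]):
        self.modules = modules
    def cycle(self) -> "Message | None":
        bids = [b for m in self.modules if (b := m.propose()) is not None]
        winner = max(bids, key=lambda b: b.salience, default=None)
        for m in self.modules:
            m.observe(winner)
        return winner

perception = TextPerception(["red block on green block", "green block removed"])
memory = Memory()
workspace = Workspace([perception, memory])
while (msg := workspace.cycle()) is not None:
    print(f"broadcast from {msg.source}: {msg.content}")
print("memory now holds:", memory.store)
```

Nothing here is doing the hard work, of course; the point is only that the workspace itself is tiny once the peripheral modules are shrunk, which is why I'd expect the whole thing to fit comfortably on existing hardware.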