Now imagine if the toons in the game could actually feel what was happening to them, and react believably to their environment, situation, and events?
How do you know that they don’t?
Why do you always have to ask subtly hard questions? I can just see your smug face, smiling that smug smile of yours with that slight tilt of the head as we squirm trying to rationalize something up quickly.
Here’s my crack at it: They don’t have what we currently think is the requisite code structure to “feel” in a meaningful way, but of course we are too confused to articulate the reasons much further.
Thank you, I’m flattered. I have asked Eliezer the same question; not sure if anyone will reply. I hoped there was a simple answer to this, related to the complexity of information processing in the substrate (like the brain or a computer), but I can’t seem to find any discussions of it online. Probably I’m using the wrong keywords.
related to the complexity of information processing in the substrate

Not directly related. I think it has a lot to do with being roughly isomorphic to how a human thinks, which requires large complexity, but a particular kind of complexity.
When I evaluate such questions IRL, like in the case of helping out an injured bird, or feeding my cat, I notice that my decisions seem to depend on whether I feel empathy for the thing. That is, do my algorithms recognize it as a being, or as a thing?
But then empathy can be hacked or faulty (consider how we react to pictures of African children, cats and small animals, ugly or disfigured people, far-away people, etc.), so I think of a sort of “abstract empathy” that does the job of recognizing morally valuable beings without all the bugs of my particular implementation of it.
In other words, I think it’s a matter of moral philosophy, not metaphysics.
Integrated Information Theory seems relevant.
Well, I can’t speak for the latest games, but I’ve personally read (some of) the core AI code for the toons in the first game of the series, and there was nothing in there that built a model of that code, or that attempted any form of what I’d even call “reasoning”. No consciousness or meta-awareness anywhere.
By being simulated by the code simulating the game in which they “are”, they could to some extent be said to be “aware” of certain values like their hunger level, if you really want to stretch wide the concept of “awareness”. However, there seems to be no consciousness anywhere to be ‘aware’ (in the anthropomorphized sense) of this.
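To make the point concrete, here is a minimal, purely hypothetical sketch (not the actual game's code) of the kind of need-driven utility AI being described: the toon "knows" its hunger level only in the sense that the code reads a number, and action selection is a simple score over advertised actions, with no self-model anywhere.

```python
from dataclasses import dataclass, field

@dataclass
class Toon:
    # Need levels in [0, 100]; lower means more urgent. Names are illustrative.
    needs: dict = field(default_factory=lambda: {
        "hunger": 80.0, "energy": 60.0, "fun": 50.0,
    })

    def tick(self, decay: float = 1.0) -> None:
        # Needs decay each simulation step; this is the full extent of the
        # toon's "awareness" of its own state -- values going up and down.
        for k in self.needs:
            self.needs[k] = max(0.0, self.needs[k] - decay)

    def choose_action(self, actions):
        # Each action advertises (need_it_satisfies, strength). Pick the one
        # that best serves the most urgent need. No model of the toon's own
        # decision process exists anywhere in this loop.
        def score(action):
            need, strength = action
            urgency = 100.0 - self.needs[need]
            return urgency * strength
        return max(actions, key=score)

toon = Toon()
toon.needs["hunger"] = 20.0  # very hungry
actions = [("hunger", 0.8), ("fun", 0.9), ("energy", 0.3)]
print(toon.choose_action(actions))  # picks ("hunger", 0.8): 80 * 0.8 = 64
```

The design mirrors the "advertisement" style of AI commonly attributed to Sims-like games: objects advertise what needs they satisfy, and the agent greedily scores them. Everything morally interesting about "feeling" hunger would have to live somewhere other than this arithmetic.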
Since my priors are such that I consider it extremely unlikely that consciousness can exist without self-modeling, and even more unlikely that consciousness is nonphysical, I conclude that there is a very low chance that they can be considered a “mind” with a consciousness that is aware of the pain and stimuli they receive.
The overall system is also extremely simple, in relative terms, considering the kind of AI code that’s normally discussed around these parts.