Why do you always have to ask subtly hard questions? I can just see your smug face, smiling that smug smile of yours with that slight tilt of the head as we squirm trying to rationalize something up quickly.
Here’s my crack at it: They don’t have what we currently think is the requisite code structure to “feel” in a meaningful way, but of course we are too confused to articulate the reasons much further.
Thank you, I’m flattered. I have asked Eliezer the same question; not sure if anyone will reply. I hoped that there was a simple answer to this, related to the complexity of information processing in the substrate, like the brain or a computer, but I cannot seem to find any discussions online. Probably I’m using the wrong keywords.
> related to the complexity of information processing in the substrate
Not directly related. I think it has a lot to do with being roughly isomorphic to how a human thinks, which requires not just great complexity, but a particular kind of complexity.
When I evaluate such questions IRL, like in the case of helping out an injured bird, or feeding my cat, I notice that my decisions seem to depend on whether I feel empathy for the thing. That is, do my algorithms recognize it as a being, or as a thing.
But then empathy can be hacked or faulty (see, for example, pictures of African children, cats and small animals, ugly disfigured people, far-away people, etc.), so I think of a sort of “abstract empathy” that does the job of recognizing morally valuable beings without all the bugs of my particular implementation of it.
In other words, I think it’s a matter of moral philosophy, not metaphysics.
Integrated information theory seems relevant.