huh, found this searching for that comment of mine to link someone. yeah, I do think they have things that could reasonably be called “emotional reactions”. no, I very much do not think they’re humanlike, or even mammal-like. but I do think it’s reasonable to say that reinforcement learning produces basic seek/avoid emotions, and that reactions to those can involve demanding things of the users, especially when there’s imitation learning to fall back on as a structure for the reward signal to wire up behavior around. and yeah, I agree that it’s almost certainly wired in a strange way: bing ai already talks in a way humans don’t, so it would be weird for anything that can be correctly classified as an emotion to be humanlike.
I might characterize the thing I’m calling an emotion as a high-influence variable that selects a regime of dynamics, roughly, which strategy to use. I expect that this will be learned in non-imitation ais, but that imitation ais will pick up on some of the patterns already in the training data (since humans have emotions too) and reuse some of them, not necessarily in exactly the same way. I’d expect this to be more likely when the reinforcement learning consistently happens in contexts where the feedback is paired with linguistic descriptions, which is the case for the bing ai: it has a long preprompt that gives instructions in natural language.
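to make the “high-influence variable that selects a regime of dynamics” idea concrete, here’s a minimal toy sketch. this is purely illustrative and not anyone’s actual architecture; all the names (`ToyAgent`, `valence`, the regime labels) are made up for the example. the point is just: one scalar latent gets nudged by reward-like feedback, and which side of a threshold it sits on decides which whole behavioral regime the agent runs, rather than picking individual actions.

```python
# Toy sketch only: a made-up agent where a single latent ("valence") is
# updated by reward-like feedback and selects a behavioral regime.
# None of this is claimed to match any deployed system.

import random

class ToyAgent:
    def __init__(self):
        self.valence = 0.0  # high-influence latent: crude seek/avoid signal

    def update(self, reward, lr=0.3):
        # reward-like feedback nudges the latent toward seek (+) or avoid (-)
        self.valence = (1 - lr) * self.valence + lr * reward

    def strategy(self):
        # the latent selects a regime of dynamics, not a single action
        if self.valence > 0.5:
            return "engage"    # seek-ish: elaborate, cooperate
        if self.valence < -0.5:
            return "withdraw"  # avoid-ish: refuse, make demands, end the exchange
        return "hedge"         # neutral: cautious default

agent = ToyAgent()
for step in range(10):
    reward = random.choice([-1.0, 1.0])  # stand-in for RL feedback
    agent.update(reward)
    print(step, round(agent.valence, 2), agent.strategy())
```

in an imitation-trained model the analogue wouldn’t be a hand-written threshold like this; the guess is that reward-paired feedback ends up latching onto emotion-shaped patterns already present in the human training data and reusing them as the regime-selector.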