I think Fellous's comments about “emotions” in machines are pretty good. As summarized by Browne:
If robots are to benefit from mechanisms that play a role similar to emotions, the use of internal variables is suggested [Michaud et al. 01]. However, Fellous warns that an isolated emotion is simply an engineering hack, i.e. describing a single, isolated internal variable as an emotion may be descriptive or anthropomorphic, but it is not biologically inspired [04]. Instead, interrelated emotions, expressed through resource mobilisation, with context-dependent computations that depend on perceived expression, are more realistic.
A consequence of this is that an artificial system must have limited resources in order to express emotions: if resources were unlimited, there would be nothing to mobilise or prioritise.
These emotions may appear different when expressed externally than when expressed internally, but they are closely related through their underlying mechanisms.
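To make the resource argument concrete, here is a minimal sketch of parallel processes competing for a fixed budget. It is only an illustration in Python; the process names, gains and numbers are my own hypothetical choices, not anything from Fellous or Browne. Nothing in it computes an “emotion”: the emotion-like state is just the pattern of how the limited budget gets mobilised when the perceived situation changes.

```python
# Hypothetical sketch: emotion-like states as resource mobilisation
# under a fixed budget, not as an explicitly computed variable.

BUDGET = 1.0  # total processing/energy budget shared by all processes

# Internal variables: per-process "urgency", continuously updated from perception.
urgency = {"explore": 0.3, "recharge": 0.2, "self_protect": 0.1}

def perceive(event, gain=0.5):
    """Perception nudges the internal variables; it never sets an 'emotion'."""
    for process, delta in event.items():
        urgency[process] = max(0.0, urgency[process] + gain * delta)

def mobilise():
    """Divide the limited budget in proportion to the current urgencies."""
    total = sum(urgency.values()) or 1.0
    return {p: BUDGET * u / total for p, u in urgency.items()}

# A looming obstacle raises the urgency of self-protection and lowers exploration.
perceive({"self_protect": 1.2, "explore": -0.2})
allocation = mobilise()

# An observer might describe the resulting allocation as "fear-like",
# but that label is purely descriptive: internally there is only the reallocation.
print(allocation)
```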
Robot emotions should thus be built according to the following guidelines [ibid]:
- emotions are not a separate centre that computes a value along some predefined dimension
- emotions should not be the result of a cognitive evaluation (a rule of the form “if in this state, then this emotion”)
- emotions are not combinations of pre-specified basic emotions (emotions are not independent of each other)
- emotions should have temporal dynamics and interact with each other
- emotions should provide system-wide control of some of the parameters (of the many ongoing, parallel processes) that determine the robot's behaviour.
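Read together, the guidelines suggest a small set of coupled, continuously evolving internal variables that are never read off as “the emotion” anywhere, but that globally modulate the parameters of whatever processes are running. Below is a minimal sketch of that reading; the decay rates, coupling matrix and parameter mappings are hypothetical choices of mine, not values from Fellous.

```python
import numpy as np

# Hypothetical sketch of the guidelines above:
# a few coupled internal variables with temporal dynamics (leaky integrators
# that interact through a coupling matrix), which are never mapped to a
# labelled emotion, but which modulate parameters of the robot's parallel
# processes system-wide.

N = 3                                   # number of internal variables
x = np.zeros(N)                         # current state of the internal variables
decay = np.array([0.9, 0.95, 0.8])      # temporal dynamics: each variable leaks
coupling = np.array([[ 0.0, -0.3,  0.1],   # interaction: variables excite/inhibit
                     [ 0.2,  0.0, -0.2],   # one another, so they are not
                     [-0.1,  0.4,  0.0]])  # independent of each other

def step(x, percept):
    """One update: leak, interact, and integrate the raw perceptual drive.
    There is no 'if state then emotion' rule anywhere in here."""
    return decay * x + coupling @ x + percept

def modulate(x):
    """System-wide control: the same internal state retunes parameters of
    several ongoing processes (speeds, thresholds, gains) rather than being
    reported as a value on a predefined emotion dimension."""
    return {
        "motor_speed_gain":   1.0 + 0.5 * np.tanh(x[0]),
        "obstacle_threshold": 0.5 - 0.3 * np.tanh(x[1]),
        "exploration_rate":   0.2 + 0.2 * np.tanh(x[2]),
    }

# Drive the dynamics with a short sequence of (hypothetical) perceptual inputs.
for percept in [np.array([0.5, 0.0, 0.1]), np.array([0.0, 0.8, 0.0]), np.zeros(N)]:
    x = step(x, percept)
    print(modulate(x))
```

The point of the sketch is only that the “emotional” state lives in the dynamics and in its diffuse effect on many parameters at once, not in any single variable or rule that could be pointed to and called an emotion.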