Does Deep Blue have emotions?

Well, as I understand the way it works, it attaches some kind of value to how ‘good’ any given board position is, then works through the tree of positions and finds the route to the ‘best’ of those positions.
Is that value an emotion?
Well no.
In a very single-dimensional way it might be a model of one, though. I assume it’s a single real number, maybe even an integer, rather than a complex set of semantic associations.
Deep Blue is seeking just “WIN!” and labelling potential board positions accordingly, whereas humans are seeking “Fun” and “Happy” and “Enough sex” and “Intellectually Interesting” and “Not scary” and god knows how many other dimensions too.
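To make that contrast concrete, here is a toy sketch. The tree, the numbers, and the move names are invented for illustration and have nothing to do with Deep Blue’s actual evaluation function or hardware; the point is just that the whole “value system” is one scalar per position, plus a search for the route to the largest one.

```python
# Toy game tree (invented): each inner node maps a move to a child node; each
# leaf holds the single scalar "goodness" of the resulting position, from the
# point of view of the player whose turn it is there.
toy_tree = {
    "e4": {"e5": +0.3, "c5": +0.1},
    "d4": {"d5": +0.2, "Nf6": -0.4},
}

def negamax(node):
    """Return (score, line): one number per position, and the route to the best one."""
    if isinstance(node, (int, float)):     # leaf: the position's scalar value
        return node, []
    best_score, best_line = float("-inf"), []
    for move, child in node.items():
        score, line = negamax(child)
        score = -score                     # the opponent's gain is our loss
        if score > best_score:
            best_score, best_line = score, [move] + line
    return best_score, best_line

print(negamax(toy_tree))                   # -> (0.1, ['e4', 'c5'])
```

The search itself is incidental; what matters is that the entire “preference” lives in that one number per leaf, whereas a person would be weighing many partially conflicting dimensions at once.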
Most of those dimensions can actually be classified as “towards” or “away”, though, which will be part of the subject of the next post in the series.
The important distinction for humans, though, is that emotions are “somatic markers”—meaning that they are distinctions in the body, for purposes of organizing action responses. They aren’t arbitrary scores, but more like “action stances” of varying degrees. So yes, they’re multi-dimensional and all of the categories you mention (e.g. “intellectually interesting” and “enough sex”) qualify… but they also largely group into (and layer on top of the machinery for) the somewhat-more-fundamental operators of “toward” and “away”.
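As a purely illustrative sketch of that grouping (the marker names, valences, and strengths below are made up, and nothing here is a claim about how the body actually implements it), each dimension can be read as a toward/away pull of some strength, with the pulls combining into a crude overall action stance:

```python
from dataclasses import dataclass

@dataclass
class Marker:
    name: str
    valence: str     # "toward" or "away"
    strength: float  # how strongly this marker is currently active, 0..1

# Made-up instances of the dimensions mentioned above.
markers = [
    Marker("intellectually interesting", "toward", 0.6),
    Marker("scary",                      "away",   0.2),  # the "not scary" goal: move away from this
    Marker("fun",                        "toward", 0.4),
]

def net_stance(markers):
    """Collapse many markers into one crude approach/avoid posture."""
    pull = sum(m.strength if m.valence == "toward" else -m.strength for m in markers)
    return ("approach" if pull > 0 else "avoid"), abs(pull)

print(net_stance(markers))   # roughly ('approach', 0.8)
```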
Sort of. It has hardware support for scoring the value of specific positions… which is actually an awful lot like the brain’s sorting and tagging, albeit considerably more crude.
It seems like a definitional debate over what the term “emotion” means—without actually offering any definitions.
Which is one reason I like the “somatic marker” term, when we’re talking about this—it highlights their nature as action postures in the physical body, being used as a scoring and sorting system. The fact that we call some of these markers “emotions” isn’t really all that relevant.
(Also, “somatic marker” helps to avoid some rationalists’ existing negative emotional tags on the idea of “emotions” being involved in reasoning.)
I think what Fellous says about “emotions” in machines is pretty good. As summarized by Browne:
If robots are to benefit from mechanisms that have a similar role to emotions it is suggested to use internal variables [Michaud et al. 01]. However, Fellous warns that an isolated emotion is simply an engineering hack, i.e. simply describing a single, isolated internal variable as an emotion could be descriptive or anthropomorphic, but not biologically inspired [04]. Instead, interrelated emotions, expressed due to resource mobilisation with context dependent computations dependent on perceived expression is more realistic.
A consequence of this is that an artificial system must have limited resources in order to express emotions.
These emotions may appear different if expressed externally or internally, but are very related due to their underlying mechanisms.
Thus robot-emotions should be built from the following guidelines [ibid]:
- emotions are not a separate centre that computes a value on some predefined dimension
- emotions should not be a result of cognitive evaluation (if state then this emotion)
- emotions are not combinations of some pre-specified basic emotion (emotions are not independent from each other)
- emotions should have temporal dynamics and interact with each other
- system-wide control of some of the parameters (of the many ongoing, parallel processes) that determine the robot behaviour
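A rough sketch of what those guidelines might point at in code follows. This is my reading, not Fellous’s or Browne’s implementation, and every variable name and coefficient is invented: there is no separate module that computes or looks up an “emotion value” from the current state, just a couple of interacting internal variables with their own temporal dynamics that continuously retune the parameters of whatever parallel processes are running.

```python
import random

class InternalState:
    """Interrelated internal variables with their own dynamics, rather than a
    separate 'emotion centre' or a lookup table of 'if state X then emotion Y'."""

    def __init__(self):
        self.arousal = 0.1
        self.frustration = 0.0

    def update(self, dt, progress, threat):
        # Temporal dynamics: each variable decays and feeds the other, so they
        # are never independent, pre-specified "basic emotions".
        d_frust = 0.5 * (1.0 - progress) - 0.3 * self.frustration
        d_arous = 0.8 * threat + 0.3 * self.frustration - 0.6 * self.arousal
        self.frustration = min(1.0, max(0.0, self.frustration + dt * d_frust))
        self.arousal     = min(1.0, max(0.0, self.arousal + dt * d_arous))

    def control_parameters(self):
        # System-wide modulation: the same variables retune the parameters of
        # many ongoing, parallel processes at once.
        return {
            "search_depth":   max(1, int(5 * (1.0 - self.arousal))),  # deliberate less when keyed up
            "retry_patience": 1.0 - self.frustration,                 # abandon a stuck subgoal sooner
            "speed_gain":     1.0 + self.arousal,                     # act faster and more forcefully
        }

state = InternalState()
for _ in range(50):
    state.update(dt=0.1, progress=random.random(), threat=0.2)
print(state.control_parameters())
```

On this reading the variables are “expressed” only as changed behaviour across the whole system, which is one way to understand the earlier remark that limited resources are needed for emotions to be expressed at all.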