We can reasonably say that something has a “thinking life” if it functions as a state machine where ‘states’ correspond to abstract models of sensory data (patterns in external stimuli). The complexity of the possible mental states is correlated with the complexity (information content) of the sensory data that can be collected and incorporated into models.
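The idea can be sketched as a toy state machine. This is purely illustrative, not a claim about how brains are implemented: the "abstract model" here is just a summary statistic (the dominant recent stimulus), standing in for genuine pattern extraction, and the class name and window size are invented for the example.

```python
from collections import Counter

class MinimalThinker:
    """Toy state machine whose 'states' are abstract models
    (here, a crude summary) of recent sensory data."""

    def __init__(self, window=3):
        self.window = window   # how much sensory history the model covers
        self.history = []      # raw external stimuli
        self.state = None      # current abstract model of the stimuli

    def sense(self, stimulus):
        self.history.append(stimulus)
        recent = self.history[-self.window:]
        # The 'model' is simply the most common recent stimulus --
        # a stand-in for extracting a pattern from sensory input.
        self.state = Counter(recent).most_common(1)[0][0]
        return self.state

t = MinimalThinker()
for s in ["warm", "warm", "cold", "warm"]:
    t.sense(s)
print(t.state)  # the state now reflects the dominant recent pattern
```

Richer sensory data would support richer models, and hence a larger space of possible states, which is the correlation the paragraph above points at.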
A cat’s brain can be reasonably interpreted as working this way. A nematode worm’s 302 neurons probably can’t. A plant’s root system almost definitely can’t.
Note that this concept of a “thinking life” or sentience is much weaker and more inclusive than the concept of “personhood” or sapience.