Maybe something like various attempts from psychology to list emotions would help?
Overall, though, I think prevailing opinion around here is more along the lines of consciousness having many different moving parts, which means that most questions will not have simple answers.
E.g. saying I am “conscious of the taste of chocolate” tells you a lot of vague but related information about me. Not only am I receiving certain signals from my taste receptors and nose and gut and peripheral glucose sensors and the texture on my tongue, but my recent past has primed me in such a way that this all gets interpreted in a particularly chocolatey way. That interpretation is then made accessible to the rest of my brain to a vague degree: probably at least my verbal self-attention and memory formation capabilities become influenced by these interpreted perceptions, my perceptual systems get primed for related stimuli, and my evaluative capabilities produce judgments of reward/displeasure that will cause updates in my thought patterns for the future. All of these together make up being “conscious of the taste of chocolate,” but each one could be slightly different in me at different times, or in different people.
In other words, consciousness has no simple essence, nor is it made up of a small number of simple essences. This is especially relevant in animal consciousness—a dog is going to have many (though perhaps not all) of the macro-scale abilities I named in talking about my sensation of taste, but the lower-level details of their implementation are going to be different. “Being conscious” is not like “being more dense than water”—with density there’s only one possible dimension of variation, and there’s basically always a clear answer for which side of the line something is on, and if we’re curious about dogs we can just test a dog to see if it’s more dense than water and we’ll get a reliable result. With consciousness, all the small parts of it can vary, and which parts we care about may depend on what context we’re asking the question in! We might do better than trying to assign a binary value of “conscious or not” by assigning some degree of consciousness, but even that is still a one-dimensional simplification of a high-dimensional pattern.
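As a toy way of stating that last point, one could keep a profile over the capabilities listed in the chocolate example instead of a single bit or degree. This is only an illustrative sketch: the dimension names are lifted from the example above and the numbers are made up.

```python
from dataclasses import dataclass, asdict

@dataclass
class ConsciousnessProfile:
    # Each field is one of the capabilities mentioned above, scored 0..1.
    sensory_integration: float   # taste/smell/texture signals being combined
    interpretation: float        # priming that makes it "chocolatey"
    global_access: float         # availability to the rest of the brain
    verbal_report: float         # influence on verbal self-attention
    memory_formation: float
    valence_evaluation: float    # reward/displeasure updates

    def as_single_degree(self) -> float:
        """One-dimensional simplification: averages away which parts differ."""
        values = list(asdict(self).values())
        return sum(values) / len(values)

me_tasting_chocolate = ConsciousnessProfile(0.9, 0.9, 0.8, 0.7, 0.8, 0.9)
dog_tasting_chocolate = ConsciousnessProfile(0.9, 0.6, 0.7, 0.0, 0.6, 0.9)
# Very different profiles can collapse to similar single "degrees of consciousness".
print(me_tasting_chocolate.as_single_degree(), dog_tasting_chocolate.as_single_degree())
```

Two quite different profiles can collapse to nearly the same scalar, which is exactly the information a one-dimensional "degree of consciousness" throws away.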
P.S. Fight me, symmetry theory of valence stans. :P
I might be biased, being from a bioinformatics background, but I would rather go for the OBO Foundry ontology for emotions than use a Wikipedia list.
Thanks, it is very handy to get something that is compatible with SUMO.
Thank you for the thoughtful comments. I am not certain that the approach I am suggesting will be successful, but I am hoping that more complex experiences may be explainable from simpler essences, similar to how the behaviour of fluids emerges from simpler atomic rules. I am currently working from the assumption that the brain is similar to a modern reinforcement learning algorithm, where there are one or more large learnt structures and a relatively simple learning algorithm. The first thing I am hoping to look at is whether all the conscious experiences could be explained purely by behaviours associated with the learning algorithm. Even better if, in trying to do this, it indicates new structures that the learning algorithm should take. For example, we have strong memories of sad events and choices we regret, which implies we rank the importance of past experiences based on these situations and weight them more heavily when learning from them. We might avoid a strategy because our intuition says it makes us sad (it is like other situations that made us sad) rather than because it is simply a poor strategy for achieving our goals.
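To make that last point concrete, here is a minimal sketch of what "weighting regretted experiences more heavily when learning" could look like in a standard replay-buffer setup. The class name, the `salience` signal, and the numbers are my own illustrative assumptions, not a claim about how the brain or any particular RL library does this.

```python
import random

class SalienceWeightedReplay:
    """Toy replay buffer that replays emotionally salient experiences more often.

    'salience' stands in for signals like regret or sadness attached to a past
    experience; the weighting scheme is purely illustrative.
    """

    def __init__(self):
        self.experiences = []   # (state, action, reward, next_state)
        self.saliences = []     # non-negative emotional weight per experience

    def add(self, experience, salience):
        self.experiences.append(experience)
        self.saliences.append(max(salience, 1e-3))  # keep every item sampleable

    def sample(self, batch_size):
        # Salient (e.g. regretted) experiences are drawn more frequently,
        # so they have a larger influence on learning updates.
        return random.choices(self.experiences, weights=self.saliences, k=batch_size)

buffer = SalienceWeightedReplay()
buffer.add(("state_a", "action_1", -1.0, "state_b"), salience=5.0)  # strongly regretted
buffer.add(("state_b", "action_2", +0.5, "state_c"), salience=0.5)  # emotionally neutral
print(buffer.sample(4))  # the regretted experience dominates the sample
```

This is essentially prioritized experience replay with an emotional signal standing in for the usual TD-error priority; the question raised above is whether signals like regret and sadness play that prioritizing role in us.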