I have an alternative hypothesis about how consciousness evolved. I’m not especially confident in it.
In my view, a large part of the cognitive demands on hominins consisted of learning skills and norms from other hominins. One of a few questions I always ask when trying to figure out why humans have a particular cognitive trait is "How could this have made it cheaper (faster, easier, more likely, etc.) to learn skills and/or norms from other hominins?" I think the core cognitive traits in question originally evolved to model the internal states of conspecifics and to make inferences about task performance, and were only later exapted for other purposes.
I consider imitation learning a good candidate among the cognitive abilities hominins may have evolved since the last common ancestor with chimpanzees, since, as I understand it, chimps are quite bad at imitation learning. So the first step may have been hominins acquiring the ability to see another hominin performing a skill as another hominin performing a skill, in a richer way than chimps do, like "That-hominin is knapping; that-hominin is striking the core at this angle." (This isn't to imply that language had emerged yet; verbal descriptions of thoughts just correspond well to the contents of those thoughts. Consider this hypothesis silent on the evolution of language for now.) Then perhaps came recursive representations about skill performance, like "This-hominin feels like this part of the task is easy, and this part is hard." (I'm not very committed to a view on whether self-representations or other-representations came first.) Then higher-order things like "This-hominin finds it easier to learn a task when parts of the task are performed more slowly, so when this-hominin performs this task in front of that-hominin-to-be-taught, this-hominin should exaggerate this part, or that part, of the task." And then "This-hominin-that-teaches-me is exaggerating this part of the task," which implicitly involves representing all the lower-order thoughts that led the other hominin to choose to exaggerate the task, and so on. This is just one example of how these sorts of cognitive traits could improve learning efficiency at both ends, in sink (learner) and source (teacher).
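If it helps, the layering I have in mind can be sketched as a toy recursive structure (purely illustrative; the names and the flat string contents are my own simplifications, not a claim about neural implementation):

```python
# Toy sketch only: the "orders of representation" described above, modeled
# as a recursive wrapper type. Each metarepresentation wraps a lower-order
# representation; the order counts how many layers deep the stack goes.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rep:
    content: str                    # what this representation is about
    about: Optional["Rep"] = None   # the lower-order representation it wraps

    def order(self) -> int:
        # First-order representations have order 1; each meta-layer adds 1.
        return 1 if self.about is None else 1 + self.about.order()

# "That-hominin is striking the core at this angle" (first-order)
knapping = Rep("that-hominin strikes the core at this angle")
# "This-hominin feels this part of the task is hard" (metarepresentation)
felt = Rep("this part of the task is hard", about=knapping)
# "This-teacher is exaggerating this part of the task" (higher-order)
teaching = Rep("the teacher exaggerates this part", about=felt)

print(teaching.order())  # → 3
```

The only point of the sketch is that each additional order of representation reuses the same wrapping machinery as the layer below it, which is what makes the hierarchy cheap to extend.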
Once hominins encountered cooperative contexts that require norms in order to be profitable, there was selection for these general imitation-learning mechanisms to be exapted for learning norms, which could result in metarepresentations of internal states relevant to norms, like emotional distress, among other things. I also think this mechanism is a large part of how evolution implements moral nativism in humans: recursive metarepresentations of one's own emotional distress can be informative when learning norms as well. Insofar as one's own internal state is informative about the True Norms, evolution can constrain the moral search space by providing introspective access to that internal state. On this view, that is pretty much what I think suffering is: a metarepresentation of an internal state of physical or emotional distress.
I think this account allows for agents to be more or less conscious, since for every object-level representation there can be a new metarepresentation, so as minds become richer, so does consciousness. I don't mean to imply that full-blown episodic memory, autobiographical narrative, and so on fall right out of a scheme like this. But the account also seems to predict that mostly just hominins are conscious; maybe some other primates are too, to a limited degree, and maybe some other animals we'll find have convergently evolved consciousness (elephants, dolphins, or magpies, say), but probably not in a way that allows them to implement suffering.
I don't think I need to invoke the evolution of language for any of this to occur; in fact, I find I rarely need to invoke language for explananda in human evolution. I think consciousness preceded the ability to make verbal reports about consciousness.
I also don't mean to imply that dividing pies, as opposed to making them, was only a small fraction of the task demands hominins faced historically; I just don't think it was the largest fraction.
Your explanation does double duty, at least in its assumptions: it also partly explains how human cooperation is stable where it wouldn't be by default. I admit that I don't provide an alternative explanation for that, but it feels outside the scope of this conversation, and I do have alternative explanations in mind that I could shore up if pressed.