Great post! I think this captures most of the variance in consciousness discussions.
I’ve been interested in consciousness over a 23-year career in computational cognitive neuroscience. I think making progress on bridging the gap between camp 1 and camp 2 requires more detailed explanations of neural dynamics. Those can be inferred from empirical data, but not easily, so I haven’t seen any explanations similar to the one I’ve been developing in my head. I haven’t published on the topic because it’s more of a liability than an asset for a neuroscience career. Now that I’m working on AI safety, consciousness seems like a distraction. It’s tempting to write a long post about it, since this community seems substantially better at engaging with the topic than neuroscientists are, but the time cost would still be substantial. If I do write such a post, I’ll cite this one in framing the issue.
One possible route to people actually caring about explanations of consciousness is public debate about AI consciousness. People are already asking whether LLMs might have some sort of consciousness and might therefore be worthy of ethical consideration, as people are. It won’t take much more person-like behavior (I think just a consistent memory) before that debate becomes really interesting to me, and I think to the public at large. That wouldn’t give an LLM phenomenal consciousness much like a human’s, but it would give it enough self-awareness to strike a lot of people as worthy of ethical consideration.
If you do, and if you’re interested in exchanging ideas, feel free to reach out. I’ve been thinking about this topic for several years now and am also planning to write more about it, though that could take a while.
Thanks!