I think there is one more level at which natural abstraction can occur: the level just “beneath” consciousness.
For example, we could create an LLM that almost perfectly matches the inputs and outputs of my internal voice dialogue. From the inside, it would make no difference to me whether the thoughts appearing in my mind were generated by such an LLM or by real biological neurons or even cortical columns. The same applies to the visual cortex and other brain regions.
Such an LLM for thoughts would be no larger than GPT-4 (I haven’t had that many new ideas). In most cases, I can’t feel changes in individual neurons and synapses; I only perceive the high-level output of entire brain regions.
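To make the substitution claim concrete, here is a minimal sketch in Python, with every name (ThoughtModule, BiologicalInnerVoice, ThoughtLLM, model.generate) purely hypothetical: if the rest of the mind only ever sees a module’s high-level inputs and outputs, then any implementation that reproduces that mapping is interchangeable from the inside.

```python
# Minimal sketch (all names hypothetical): the rest of the mind only calls
# next_thought(), so an implementation with matching input/output behavior
# is indistinguishable from the biological one at this level of abstraction.
from typing import Protocol


class ThoughtModule(Protocol):
    def next_thought(self, observations: str, context: str) -> str:
        """Map current observations plus prior context to the next thought."""
        ...


class BiologicalInnerVoice:
    """Stand-in for the cortical machinery that actually produces inner speech."""

    def next_thought(self, observations: str, context: str) -> str:
        raise NotImplementedError("implemented by neurons, not by code")


class ThoughtLLM:
    """Hypothetical LLM trained to reproduce the same input/output mapping."""

    def __init__(self, model):
        self.model = model  # assumed: any text generator with a generate() method

    def next_thought(self, observations: str, context: str) -> str:
        prompt = f"{context}\nObservations: {observations}\nNext thought:"
        return self.model.generate(prompt)


# Swapping BiologicalInnerVoice for ThoughtLLM changes nothing for any caller
# that only depends on the ThoughtModule interface.
```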
I think we can achieve 99 percent behavioral and internal-thought mimicry with this approach, but a question arises: what about qualia? This question, however, is no easier to answer if we choose a much lower level of abstraction.
If we learn that generating qualia requires performing some special mathematical operation F(Observations), we can add this operation on top of the thought-LLM’s outputs. Conversely, if we have no idea what F(Observations) is, going to a deeper level of abstraction won’t reassure us that we’ve gone deep enough to capture F(O).
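The point about F(Observations) amounts to a one-line composition. A minimal sketch, with F and thought_llm as hypothetical placeholders: if F were known, it could simply be applied alongside the thought-LLM’s output; if it is unknown, nothing about the chosen abstraction level tells us whether it has been captured.

```python
# Minimal sketch (F and thought_llm are hypothetical placeholders): compose a
# known qualia-generating operation F with the thought-LLM's output; if F is
# unknown, it stays None regardless of how deep the abstraction level goes.
from typing import Any, Callable, Optional, Tuple


def conscious_step(
    observations: str,
    context: str,
    thought_llm: Callable[[str, str], str],
    F: Optional[Callable[[str], Any]] = None,
) -> Tuple[str, Any]:
    """Produce the next thought and, if F is known, the associated qualia."""
    thought = thought_llm(observations, context)
    qualia = F(observations) if F is not None else None  # unknown operation
    return thought, qualia
```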
Similar ideas here: https://medium.com/@bablulawrence/cognitive-architectures-and-llm-applications-83d6ba1c46cd