I do not think that “101 is a prime number” and “I am currently on Earth” are implemented that differently in my brain; both seem to be implemented in parameters rather than architecture. I'd guess they also aren't implemented differently in modern-day LLMs. Maybe the relevant analogue for LLMs would be the facts the model brings up when prompted with the empty string vs. with some detailed prompt.
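Concretely, that probe might look something like the sketch below (gpt2 and the sampling settings are arbitrary stand-ins for “a modern-day LLM”; the empty prompt is approximated by the BOS token, since generation needs at least one input token):

```python
# Rough sketch: compare which facts a model volunteers with no context
# vs. under a detailed prompt. Model choice and sampling knobs are arbitrary.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def sample_continuations(prompt: str, n: int = 5, max_new_tokens: int = 30):
    # GPT-2 cannot generate from a truly empty input, so approximate the
    # empty string with the BOS token.
    text = prompt if prompt else tokenizer.bos_token
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_p=0.9,
        max_new_tokens=max_new_tokens,
        num_return_sequences=n,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

# The model's "default" outputs vs. outputs conditioned on a detailed prompt:
print(sample_continuations(""))
print(sample_continuations("Q: Is 101 a prime number? A:"))
```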
The proposition “I am currently on Earth” is implemented both in the parameters and in the architecture, independently.
How can “I am currently on Earth” be encoded directly into the structure of the brain? I also feel that “101 is a prime number” is more fundamental to me (being about logical rather than physical structure) than currently being on Earth, so I'm having a hard time understanding why it is not also considered a hinge belief.