I’ve thought about this for a while, and I think I have a simpler explanation for the “old brain”/“new brain” motivations as Hawkins frames them. Given that we can simulate people (e.g. dream characters, tulpas, and hallucinations), the map/world model Hawkins describes must also be able to represent people and their behaviors; more specifically, it seems to simulate identities or personalities. If one brain can simulate many different people, then whatever is being simulated clearly doesn’t have the complexity of a whole brain. In other words, identities are far simpler than the brains that host them. Think back to interacting with characters in dreams: the fidelity is relatively high; most people don’t even realize they’re dreaming and believe they’re talking to real people.

What I wonder is: are people’s own identities also learned simulations? If so, it may explain some black-boxy stuff here. If the world model has, built in, a representation of itself and of the host’s identity, then it can run a simulation of the host identity to influence action selection. For example, consider a host identity belonging to a person who learned about climate change at a young age and strongly does not want to have children. Even though there should be an inherent motivation in the brain (notice I say the whole brain, not the “old brain”) to reproduce, that behavior gets voted out because it doesn’t make sense for the host identity to perform it. “New brain” motivations, as Hawkins puts them, are just identity motivations.
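To make the “voted out” step concrete, here’s a minimal toy sketch of identity-gated action selection (all the names, weights, and the threshold are hypothetical illustrations of my point, not anything from Hawkins): innate drives propose actions with raw motivational strength, and a learned identity model scores how consistent each action is with the host’s self-simulation. The strongest drive can still lose, not because the drive is weak, but because the identity simulation vetoes it.

```python
# Toy sketch of identity-gated action selection.
# Everything here (the drive weights, consistency scores, and 0.5 threshold)
# is a hypothetical illustration, not a model from Hawkins's book.

# Innate drives propose candidate actions with raw motivational strength.
drive_proposals = {
    "reproduce": 0.9,
    "eat": 0.7,
    "explore": 0.4,
}

# Learned identity model: how consistent each action is with the host's
# self-simulation (e.g. "someone who decided not to have kids").
identity_consistency = {
    "reproduce": 0.05,  # strongly inconsistent with this identity
    "eat": 0.95,
    "explore": 0.8,
}

CONSISTENCY_THRESHOLD = 0.5  # actions scoring below this get "voted out"

def select_action(proposals, consistency, threshold=CONSISTENCY_THRESHOLD):
    """Pick the strongest drive whose action the identity would plausibly perform."""
    viable = {
        action: strength * consistency[action]
        for action, strength in proposals.items()
        if consistency.get(action, 0.0) >= threshold
    }
    return max(viable, key=viable.get) if viable else None

# "reproduce" has the strongest drive (0.9) but is vetoed by the identity
# simulation, so the winner is "eat".
print(select_action(drive_proposals, identity_consistency))  # -> "eat"
```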