Strong upvote, but I disagree on something important. There’s an underlying generator that doesn’t so much choose between simulacra as do a weighted average over them in its response. The notion that you can “speak” to that generator is a type error, perhaps akin to thinking that you can speak to the country ‘France’ by calling its elected president.
My current model says that the human brain also works by taking the weighted (and normalised!) average (the linear combination) over several population vectors (modules) and using the resultant vector to stream a response. There are definite experiments showing that something like this is the case for vision and motor commands, and strong reasons to suspect that this is how semantic processing works as well.
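As a toy sketch of that picture (the “module” vectors and activation weights below are entirely made up for illustration, not a model of actual neural data):

```python
import numpy as np

# Invented "population vectors" for two modules contributing to a response.
module_a = np.array([1.0, 0.0, 0.5])
module_b = np.array([0.2, 1.0, 0.0])

# Raw activations of each module for some input (also invented).
weights = np.array([2.0, 1.0])

# Normalise so the contributions sum to 1...
weights = weights / weights.sum()

# ...then take the linear combination: the resultant vector is what
# would drive the streamed response.
resultant = weights[0] * module_a + weights[1] * module_b
print(resultant)
```

The point of the normalisation step is that the output stays in the same “space” as the modules themselves, rather than blowing up when many modules are active at once.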
Consider for example what happens when you produce what Hofstadter calls “wordblends”[1]:

“Don’t leave your car there‒you mate get a ticket.” (blend of “may” and “might”)
Or spoonerisms, as when you say “Hoobert Heever” instead of “Herbert Hoover”. “Hoobert” is what you get when you take “Herbert” and mix in a bit of whatever vectors are missing from “Hoover”. Just as neurology has been one of the primary sources of insight into how the brain works, so too our linguistic pathologies give us insights into what the brain is trying to do with speech all the time.
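That mixing story can be sketched numerically (the word vectors below are invented purely for illustration; a real system would learn them):

```python
import numpy as np

# Hypothetical word vectors -- the numbers are made up to illustrate
# the mixing, not taken from any real embedding model.
herbert = np.array([1.0, 0.0, 0.0])
hoover  = np.array([0.0, 1.0, 0.0])

# "Take 'Herbert' and mix in a bit of whatever is missing from 'Hoover'":
alpha = 0.3  # how much of the other word leaks in
hoobert = herbert + alpha * (hoover - herbert)
print(hoobert)  # [0.7 0.3 0. ]
```

A small `alpha` gives a near-miss like “Hoobert”; `alpha = 0.5` would be an even blend of the two words.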
Note that every “module” or “simulacrum” can include inhibitory connections (negative weights). Thus, if some input simultaneously activates modules for “serious” and “flippant”, they will inhibit each other (perhaps exactly cancel each other out) and the result will not look like it’s both half serious and half flippant. In other cases, you have modules that are more or less compatible and don’t inhibit each other, e.g. “flippant”+“kind”.
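A minimal numerical sketch of that cancellation (hypothetical vectors: “serious” and “flippant” point in opposite directions, so they are mutually inhibitory, while “flippant” and “kind” are orthogonal, so they are compatible):

```python
import numpy as np

# Invented module vectors, chosen only to illustrate the geometry.
serious  = np.array([ 1.0, 0.0])
flippant = np.array([-1.0, 0.0])  # opposite to "serious": mutual inhibition
kind     = np.array([ 0.0, 1.0])  # orthogonal to "flippant": compatible

# Equal activation of mutually inhibitory modules cancels out:
cancelled = 0.5 * serious + 0.5 * flippant
print(cancelled)  # [0. 0.] -- not "half serious and half flippant"

# Compatible modules combine without destroying each other:
combined = 0.5 * flippant + 0.5 * kind
print(combined)  # [-0.5  0.5]
```

The key point is that averaging over modules is not the same as averaging over their surface behaviours: opposed modules can null each other out entirely.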
Anyway, my point is that if there are generalities involved in the process of weighting various simulacra, then it’s likely that every input gets fed through whatever part of the network is responsible for processing those. And that central generator is likely to be extremely competent at what it’s doing; it’s just hard for humans to tell, because it’s not doing anything human.
Comfortably among the best lectures of all time.