Chapter 6: How Does It Work?
Finding a path in the Knowledge Graph: that’s what this part is about.
How does the machine that carries our signal across the network actually work?
We have only object activations, connection strengths, and the rule “fire together, wire together.” It seems like we need a hint.
My guess was that the pathfinding mechanism should be observable to us. We can recall things, and it looks like we have some control over the process. So I decided I needed to understand how the thing named “I” works with memory.
And here I remembered memory models. They weren’t built to describe what happens inside, but now we can do that! We know about Long-Term Potentiation, which lets us remember things for both short and long periods. We know about Hebb’s rule. Maybe we can connect our experience with their description of the behavioral side, improve them, and create our own model?
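Before going further, it helps to pin down the raw ingredients we have to work with. Here is a minimal sketch; the objects, weights, and learning rate are all my own illustrative choices, and the update is just one common formalization of Hebb’s rule:

```python
# A minimal sketch of the raw ingredients: activations, connection
# strengths, and a Hebbian "fire together, wire together" update.
# All names and constants are illustrative, not taken from any model.

activations = {"dog": 1.0, "apple": 0.0, "bark": 1.0}

# Connection strengths between pairs of objects.
weights = {("dog", "bark"): 0.5, ("dog", "apple"): 0.1}

LEARNING_RATE = 0.1

def hebbian_update(weights, activations):
    """Strengthen every connection whose two endpoints fired together."""
    for (a, b), w in weights.items():
        weights[(a, b)] = w + LEARNING_RATE * activations[a] * activations[b]

hebbian_update(weights, activations)
print(weights)  # ("dog", "bark") grew to 0.6; ("dog", "apple") stayed at 0.1
```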
So I decided to re-read about them. And there was one strange fact: the number of random items people can remember at a time does not depend on “information size”.
You can easily remember “world”, “within”, “enormous”, “logging”, “dog”, “workaround”, “apple”.
But if I give you 1, 5, 30, 94, 83, 18, 12, 48, 0, 43, 134, you’ll probably fail to remember them after a single read. And the amount of information in that number sequence is much smaller than in the word sequence, no matter how you count.
And yes, my “random” here is like the one in XKCD: Random Number, but in the real experiments it wasn’t.
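We can sanity-check that claim with a rough back-of-the-envelope count. Assume, naively, that each word is drawn uniformly from a vocabulary of about 170,000 English words and each number uniformly from the range 0–134 (both assumptions are mine, purely for illustration):

```python
import math

# Rough bit counts under naive uniform-sampling assumptions.
words = ["world", "within", "enormous", "logging", "dog", "workaround", "apple"]
numbers = [1, 5, 30, 94, 83, 18, 12, 48, 0, 43, 134]

VOCABULARY_SIZE = 170_000          # assumed English vocabulary size
NUMBER_RANGE = max(numbers) + 1    # numbers drawn from 0..134

word_bits = len(words) * math.log2(VOCABULARY_SIZE)
number_bits = len(numbers) * math.log2(NUMBER_RANGE)

print(f"7 words:    ~{word_bits:.0f} bits")    # ~122 bits
print(f"11 numbers: ~{number_bits:.0f} bits")  # ~78 bits
```

So the seven easy-to-remember words carry roughly half again as much raw information as the eleven hard-to-remember numbers.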
We already have one piece of that puzzle: we organize information into strongly coupled networks. The numbers were less coupled, so there were more objects. But why does our “UI/UX” part even care about the number of objects?
If we were copying information, we wouldn’t care about “objects.”
But who said that we copy it?
When we build an application, we have a simple UI part that only cares about showing information, and a powerful server behind it that solves the complicated tasks.
What if all the solving happens on the server side? What if “Me” exists only to pick the next task in the queue?
What if our UI doesn’t have enough memory to store the data?
A small amount of memory + evidence that the data retrieved varies in size + evidence that a constant number of objects can be extracted.
How should we build a system under those constraints?
If you are a programmer, you’ll solve this puzzle quickly.
Did you get it?
To save memory, you should keep only one instance of each object.
To retrieve information about that object, you need a Reference.
An address in memory. And a reference is far more lightweight than the object itself.
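If that sounds abstract, Python makes the point almost for free, since it already stores references everywhere. A small sketch (the object and its size are illustrative):

```python
import sys

# One heavy object, stored exactly once.
article = "x" * 1_000_000  # pretend this is one large, rich memory

# Working memory holds only references to the object, never copies.
working_memory = [article] * 7

print(sys.getsizeof(article))         # ~1,000,049 bytes: the object itself
print(sys.getsizeof(working_memory))  # ~112 bytes: just seven pointers
assert all(item is article for item in working_memory)  # same single instance
```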
And that’s a surprisingly good explanation for all the weird stuff about the “magic number 7”.
In one of the models I’ve seen, the memory we’re currently using was called Working Memory. And I think that’s a nice name.
In our case, information retrieval looks like activating an object in our Knowledge Graph. We activate one object, it activates other objects, we receive references to them, and we repeat the process. It’s WORKING.
No, it’s not. The problem is that this algorithm dumps complete junk into our working memory. The junk then activates the wrong objects, strengthening the connections between them. And, as we found earlier, strengthening the wrong connections leads us to Wonderland.
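Here is that naive retrieval loop as a sketch, with a toy graph and a depth cutoff that are purely my own assumptions. Notice how it hoovers up everything reachable, junk included:

```python
from collections import deque

# Toy knowledge graph: object -> objects it activates. Illustrative only.
graph = {
    "apple": ["fruit", "red"],
    "fruit": ["food"],
    "red": ["color"],
    "food": [],
    "color": [],
}

def naive_retrieval(graph, start, max_steps=3):
    """Spread activation outward, collecting references with no filtering."""
    working_memory = []             # references received so far
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        obj, depth = frontier.popleft()
        working_memory.append(obj)            # receive a reference
        if depth < max_steps:
            for neighbor in graph[obj]:       # it activates other objects
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append((neighbor, depth + 1))
    return working_memory

print(naive_retrieval(graph, "apple"))
# ['apple', 'fruit', 'red', 'food', 'color'] -- everything reachable, junk and all
```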
We should carefully filter references before anything lands in the mailbox of our working memory. We need a spam filter.
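How exactly that filter works is the open question, but as a placeholder sketch (the threshold and the capacity are assumptions of mine; the capacity is a nod to the magic number 7):

```python
ACTIVATION_THRESHOLD = 0.4  # assumed cutoff, not a measured value
CAPACITY = 7                # nod to the "magic number 7"

def spam_filter(candidates, capacity=CAPACITY):
    """Admit only the strongest references into working memory."""
    strong = [(obj, a) for obj, a in candidates.items() if a >= ACTIVATION_THRESHOLD]
    strong.sort(key=lambda pair: pair[1], reverse=True)
    return [obj for obj, _ in strong[:capacity]]

candidates = {"fruit": 0.9, "red": 0.7, "color": 0.2, "food": 0.1}
print(spam_filter(candidates))  # ['fruit', 'red'] -- the junk never arrives
```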
We are at the finish line!
Our next question: “How do we filter them?”