Bostrom says that machines can clearly have much better working memory than ours, which can remember a puny 4-5 chunks of information (p60). I’m not sure why this is so clear, except that it seems likely that everything can be much better for machine intelligences given the hardware advantages already mentioned, and given the much broader range of possible machine intelligences than biological ones.
To the extent that working memory is just like having a sheet of paper to one side where you can write things, we more or less already have that, though I agree it could be better integrated. To the extent that working memory involves something more complicated, like the remembered ideas being actively juggled in some fashion in our minds, I see no clear (extra) reason that machines would do a lot better. I personally don’t have a good enough understanding of why our working memories are so small to begin with—clearly we have a lot more storage capacity in some sense, which is used for other memories.
WM raises issues of computational complexity which have so far been ignored. If working memory is the set of concepts that are currently being matched against each other, then the complexity of the matching is probably n^2. If it is the set of concepts all permutations of which are being matched against variables in rules, the complexity is n!. It’s easy to imagine cognitive architectures in which the computational capacity needed to handle 9 items in WM would be orders of magnitude higher than that needed to handle 5. I suspect that’s why our WM is so limited, particularly in light of the fact that WM appears to be highly correlated with intelligence (according to Michael Vassar).
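To put rough numbers on that intuition, here is a back-of-envelope sketch; the two matching schemes and the 5-vs-9 comparison are just my illustrative assumptions, not a real cognitive model:

```python
from math import comb, factorial

# Toy comparison of how two matching schemes scale with the number of items
# held in working memory (purely illustrative assumptions, not a cognitive model).
for n in (5, 9):
    pairwise = comb(n, 2)       # match every pair of concepts once: roughly n^2 growth
    orderings = factorial(n)    # match every ordering against a rule's variables: n! growth
    print(f"{n} items: {pairwise} pairwise matches, {orderings:,} orderings")

# 5 items: 10 pairwise matches, 120 orderings
# 9 items: 36 pairwise matches, 362,880 orderings
```

Going from 5 to 9 items only roughly triples the pairwise work, but multiplies the all-orderings work by about 3,000, which is the kind of blow-up that would make a larger WM expensive.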
To me, this seems like an issue with the definition of ‘chunk of information’. Sure, maybe I can only remember a few at a time, but each chunk has a whole bunch of associations and connected information that I can access fairly easily. As such, my guess is that these chunks actually store a very large number of bits, and that’s why you can’t fit too many of them into short-term memory at once. Of course, you could do even better with better hardware etc., but this seems to just be an instance of the point “humans have finite memory, for every finite number there’s a bigger number, therefore we could make machines with more memory”.
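One rough way to put numbers on "each chunk carries a lot of bits": the label for a chunk is cheap, but the associations it gives access to are not. The figures below are made-up ballpark values, purely to illustrate the asymmetry:

```python
import math

# Made-up ballpark: the cost of naming a chunk vs. the content it gives access to.
vocabulary_of_familiar_concepts = 50_000        # assumed size of one's concept repertoire
bits_to_name_a_chunk = math.log2(vocabulary_of_familiar_concepts)

assumed_content_per_chunk_bytes = 1_000_000     # assume ~1 MB of associations per chunk
bits_of_accessible_content = 8 * assumed_content_per_chunk_bytes

print(f"naming a chunk: ~{bits_to_name_a_chunk:.0f} bits")           # ~16 bits
print(f"content it unlocks: ~{bits_of_accessible_content:,} bits")   # ~8,000,000 bits
```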
To the extent that working memory is just like having a sheet of paper to one side where you can write things, we more or less already have that, though I agree it could be better integrated.
And faster, by many orders of magnitude! Modern PCs already have RAM capacities on the order of tens of gigabytes. If we’re talking about simply written words (as opposed to arbitrary drawings, the storage space requirements for which are somewhat trickier), that’s not the equivalent of a page, or a book—it’s on the order of a library. And they can read or overwrite the whole thing within a few seconds.
Yes, in principle you can do the same thing by hand. In practice, writing out even one full RAM dump by hand would probably take longer than a current human lifetime.
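To make the scale concrete, here is a back-of-envelope version; every number in it is my own ballpark assumption (32 GB of RAM, six bytes per word, 20 words per minute by hand, and so on):

```python
# Back-of-envelope: tens of gigabytes of RAM vs. books, rewrite time, and handwriting.
ram_bytes = 32e9              # assume a 32 GB machine
bytes_per_word = 6            # ~5 letters plus a space, plain ASCII text
words_per_book = 100_000      # a longish book

words_in_ram = ram_bytes / bytes_per_word
print(f"~{words_in_ram / words_per_book:,.0f} books of plain text")   # ~53,000 books: library scale

memory_bandwidth_bytes_per_s = 20e9   # a modest figure for modern RAM
print(f"~{ram_bytes / memory_bandwidth_bytes_per_s:.1f} s to overwrite the lot")  # a second or two

handwriting_words_per_minute = 20     # sustained handwriting speed
hours_per_day = 8
years_to_copy = words_in_ram / handwriting_words_per_minute / 60 / hours_per_day / 365
print(f"~{years_to_copy:,.0f} years to write it out by hand")         # on the order of 1,500 years
```

Even on generous assumptions about handwriting speed, the by-hand copy overshoots a human lifetime by more than an order of magnitude.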
That by itself would provide a speed-type superintelligence advantage, to the extent that flat memory (i.e. memory that is not associative in the way human memory is) is a limitation on our intelligence.
Concretely, we could imagine a model in which the brain automatically explored all ways of composing two concepts in working memory (as a kind of built-in architectural feature), or even did something more elaborate (e.g. explored all possible subsets). In this scenario, it would be very expensive to scale up the size of working memory while retaining the same characteristics, though it wouldn’t be an in-principle obstruction.
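Here is a tiny sketch of that toy model; the concept lists and working-memory sizes are placeholders I made up:

```python
from itertools import combinations

# Toy version of the model: every pair of concepts currently in working memory
# gets composed automatically; the fancier "all subsets" variant visits 2^n sets.
def compose_all_pairs(concepts):
    return [f"{a}+{b}" for a, b in combinations(concepts, 2)]

print(len(compose_all_pairs(["dog", "leash", "walk", "rain"])))     # 6 compositions for 4 items
print(len(compose_all_pairs([f"concept{i}" for i in range(9)])))    # 36 compositions for 9 items
print(2 ** 4, "vs", 2 ** 9, "subsets to explore")                   # 16 vs 512
```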
We touched on how important WM is to intelligence last time, too, and there was some dispute. I think we need to find some results in the literature.