Very good. Objection 2 in particular resonates with my view of the situation.
One other thing that is often missed is the fact that SI assumes that the development of superintelligent AI will precede other possible scenarios—including the augmented human intelligence scenario (BCI producing superhumans, with human motivations and emotions, but hugely enhanced intelligence). In my personal view, this scenario is far more likely than the creation of either friendly or unfriendly AI, and the problems related to this scenario are far more pressing.
Could you expand on that?

I can try, but the issue is too complex for comments. A series of posts would be required to do it justice, so mind the relative shallowness of what follows.
I’ll focus on one thing. An artificial enhancement of intelligence that adds more “spaces” to working memory would create a human being capable of thinking far beyond any unenhanced human. This is not just a quantitative jump: we aren’t talking about someone who thinks along the same lines, just faster. We are talking about a qualitative change, making connections that are literally impossible for anyone else to make.
(This is even more unclear than I thought it would be. So a tangent to, hopefully, clarify. You can hold, say, seven items in your mind while considering any subject. This vastly limits your ability to consider any complex system. In order to do so at all, you have to construct “composite items” out of many smaller items. For instance, you can think of a mathematical formula, matrix, or an operation as one “item,” which takes one space, and therefore allows you to cram “more math” into a thought than you would be able to otherwise. Alternate example: a novice chess player has to look at every piece, think about likely moves of every one, likely responses, etc. She becomes overwhelmed very quickly. An expert chess player quickly focuses on learned series of moves, known gambits and visible openings, which allows her to see several steps ahead.
One of the major failures in modern society is the illusion of understanding in complex systems. Any analysis picks out the small number of items we can keep in mind at one time, and then bases its “solutions” on them (Watts’s “Everything Is Obvious” book has a great overview of this). Add more slots to working memory, and you suddenly have humans with a qualitatively improved ability to understand complex systems. Maybe still not fully, but far better than anyone else. Sociology, psychology, neuroscience, economics… A human being with a few dozen working memory slots would be to economics what a quantum computer with eight qubits would be to cryptography—whoever develops one first can wreak havoc as they like.)
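To make the “composite items” point concrete, here is a minimal toy sketch; the slot count, the chunk inventory, and the greedy longest-match rule are all illustrative assumptions, not a model of actual cognition:

```python
# Toy model of chunking: a fixed-capacity buffer holds "items", where an
# item is either a raw symbol or a learned composite chunk. The capacity
# and the chunk inventory below are illustrative assumptions.
CAPACITY = 7  # the proverbial "seven items"

def items_needed(s: str, chunks: set[str]) -> int:
    """Slots required to hold s, greedily matching the longest known chunk."""
    i, items = 0, 0
    while i < len(s):
        match = next((c for c in sorted(chunks, key=len, reverse=True)
                      if s.startswith(c, i)), None)
        i += len(match) if match else 1
        items += 1
    return items

digits = "1492177600731"                            # 13 raw symbols
print(items_needed(digits, set()) <= CAPACITY)      # False: 13 items overflow the buffer
print(items_needed(digits, {"1492", "1776", "0073"}) <= CAPACITY)  # True: 4 composite items fit
```

The expert chess player in the example above is running exactly this trick, with opening patterns instead of digit groups.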
When this work starts in earnest (ten to twelve years from now would be my estimate), how do we control the outcomes? Will we have tightly controlled superhumans, surrounded and limited by safety mechanisms? Or will we try to find “humans we trust” to become the first enhanced humans? Will there be a panic against such developments (which would then force further work to be done in secret, probably associated with military uses)?
Negative scenarios are manifold (lunatic superhumans destroying the world or establishing tyranny; lobotomized/drugged superhumans used as weapons of war or for crowd manipulation; completely sane superhumans destroying civilization due to their still-present, unmodified irrational biases; etc.). Positive scenarios are comparable to those of Friendly AI (unlimited scientific development, cooperation on a completely new scale, reorganization of human life and society...).
How do we avoid the negative scenarios, and increase the probability of the positive ones? Very few people seem to be talking about this (some because it still seems crazy to the average person, some explicitly because they worry about the panic/push into secrecy response).
I like this series of thoughts, but I wonder about just how superior a human with 2 or 3 times the working memory would be.

Currently, do all humans have the same amount of working memory? If not, how “superior” are those with more working memory?

A vaguely related anecdote: working memory was one of the things that was damaged after my stroke; for a while afterwards I was incapable of remembering more than two or three items when asked to repeat a list. I wasn’t exactly stupider than I am now, but I was something pretty similar to stupid. I couldn’t understand complex arguments, I couldn’t solve logic puzzles that required a level of indirection, and I would often lose track of the topic of a sentence halfway through.
Of course, there was other brain damage as well, so it’s hard to say what causes what, and the plural of anecdote is not data. But subjectively it certainly felt like the thing that was improving as I recovered was my ability to hold things in memory… not so much the number of items as the reliability of the buffers themselves. I often had the thought as I recovered that if I could somehow keep improving my working memory—again, not so much “add slots” as make the whole framework more reliable—I would end up cleverer than I started out.

Take it for what it’s worth.
It would appear that all of us have very similar amounts of working memory space. It gets very complicated very fast, and there are some aspects that vary a lot. But in general, its capacity appears to be the bottleneck of fluid intelligence (and a lot of crystallized intelligence might be, in fact, learned adaptations for getting around this bottleneck).
How superior would it be? There are some strong indications that adding more “chunks” to the working space would be somewhat akin to adding more qubits to a quantum computer: if having four “chunks” (one of the most popular estimates for an average young adult) gives you 2^4 units of fluid intelligence, adding one more would increase your intelligence to 2^5 units. The implications seem clear.
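To spell out the arithmetic of that analogy (taking the 2^k scaling purely as a hypothesis of the analogy, not an established result):

```python
# Hypothetical scaling from the analogy above: fluid intelligence ~ 2**k
# for k working-memory chunks. The exponential form is an assumption,
# not an empirical law.
def relative_fluid_intelligence(chunks: int) -> int:
    return 2 ** chunks

baseline = relative_fluid_intelligence(4)   # 16 "units" for a typical young adult
enhanced = relative_fluid_intelligence(5)   # 32 "units"
print(enhanced / baseline)                  # 2.0: one extra chunk doubles capacity
print(relative_fluid_intelligence(24) // baseline)  # 1048576: a few dozen chunks -> ~10^6x
```

Whether fluid intelligence actually scales anything like this is exactly what the replies below dispute.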
I’m curious as to why this comment has been downvoted. Kalla seems to be making an essentially uncontroversial and correct summary of what many researchers think is the relevance of working memory size.
(Note: it is not downvoted as I write this comment.)
First let me say that I have enjoyed kalla’s recent contributions to this site, and hope that the following won’t come across as negative. But to answer your question, I at least question both the uncontroversiality and correctness of the summary, as well as the inference that more working memory increases abilities exponentially quickly. Kalla and I discussed some of this above, and he doesn’t think that his claims hinge on specific facts about working memory, so most of this is irrelevant at this point, but it might answer your question.
EDIT: Also, by correctness I mainly mean that I think our (us being cognitive scientists) understanding of this issue is much less clear than kalla’s post implies. His summary reflects my understanding of the current working theory, but I don’t think the current working theory is generally expected to be correct.
Although the exact relationship isn’t known, there’s a strong connection between IQ and working memory—apparently both in humans and animals. E.g. Matzel & Kolata 2010:
Accumulating evidence indicates that the storage and processing capabilities of the human working memory system co-vary with individuals’ performance on a wide range of cognitive tasks. The ubiquitous nature of this relationship suggests that variations in these processes may underlie individual differences in intelligence. Here we briefly review relevant data which supports this view. Furthermore, we emphasize an emerging literature describing a trait in genetically heterogeneous mice that is quantitatively and qualitatively analogous to general intelligence (g) in humans. As in humans, this animal analog of g co-varies with individual differences in both storage and processing components of the working memory system. Absent some of the complications associated with work with human subjects (e.g., phonological processing), this work with laboratory animals has provided an opportunity to assess otherwise intractable hypotheses. For instance, it has been possible in animals to manipulate individual aspects of the working memory system (e.g., selective attention), and to observe causal relationships between these variables and the expression of general cognitive abilities. This work with laboratory animals has coincided with human imaging studies (briefly reviewed here) which suggest that common brain structures (e.g., prefrontal cortex) mediate the efficacy of selective attention and the performance of individuals on intelligence test batteries. In total, this evidence suggests an evolutionary conservation of the processes that co-vary with and/or regulate “intelligence” and provides a framework for promoting these abilities in both young and old animals.
or Oberauer et al. 2005:

Hence, we might conclude—setting aside the above mentioned caveats for such analyses—that [Working Memory Capacity] and g share the largest part of their variance (72%) but are not identical. [...] Our methodological critique notwithstanding, we believe that Ackerman et al. (2005) are right in claiming that WMC is not the same as g or as gf or as reasoning ability. Our argument for a distinction between these constructs does not hinge on the size of the correlation but on a qualitative difference: On the side of intelligence, there is a clear factorial distinction between verbal and numerical abilities (e.g., Süß et al., 2002); on the side of WMC, tasks with verbal contents and tasks with numerical contents invariably load on the same factor (Kyllonen & Christal, 1990; Oberauer et al., 2000). This mismatch between WMC and intelligence constructs not only reveals that they must not be identified but also provides a hint as to what makes them different. We think that verbal reasoning differs from numerical reasoning in terms of the knowledge structures on which they are based: Verbal reasoning involves syntax and semantic relations between natural concepts, whereas numerical reasoning involves knowledge of mathematical concepts. WMC, in contrast, does not rely on conceptual structures; it is a part of the architecture that provides cognitive functions independent of the knowledge to which they are applied. Tasks used to measure WMC reflect this assumption in that researchers minimize their demand on knowledge, although they are bound to never fully succeed in that regard. Still, the minimization works well enough to allow verbal and numerical WM tasks to load substantially on a common factor. This suggests that WMC tests come closer to measuring a feature of the cognitive architecture than do intelligence tests.
Now this has me wondering if it’s possible to increase your own working memory via practice or some other means. I shall go do some reading on the matter.

Thanks for the links!
My admittedly uninformed impression is that the state of knowledge about working memory is pretty limited, at least relative to the claims you are making. Do you think you could clarify somewhat, e.g. either show that our knowledge is not limited, or that you don’t need any precise knowledge about working memory to support your claims? In particular, I have not seen convincing evidence that working memory even exists, and it’s unclear what a “chunk” is, or how we manipulate them (perhaps manipulation costs grow exponentially with the number of chunks).
Whether “working memory” is memory at all, or whether it is a process of attentional control as applied to normal long-term memory… we don’t know for sure. So in that sense, you are totally right.
But the exact nature of the process is, perhaps strangely, unimportant. The question is whether the process can be enhanced, and I would say that the answer is very likely to be yes.
Also, keep in mind that the working memory enhancement scenario is just one I pulled out of thin air as an example. The larger point is that we are rapidly gaining the ability to non-invasively monitor the activities of single neurons (with fluorescent markers, for instance), and, more importantly, we are gaining the ability to control them (with finely tuned and targeted optogenetics). Thus, reading from and writing into the brain is no longer an impossible hurdle requiring nanoimplants or teeny-tiny electrodes (with requisite wiring). All you need are optical fibers and existing optogenetic tools (in theory, at least).
To generalize the point even further: we have the tools and the know-how with which we could start manipulating and enhancing existing neural networks (including those in human brains). It would be crude and inefficient, with a great deal of side effects; we don’t really understand the underlying architecture well enough to know what we are doing. But we could still, in theory, begin today, if for some reason we decided to (and lost our ethics along the way). On the other hand, we don’t have a clue how to build an AGI. Regardless of any ethical or eschatological concerns, we simply couldn’t do it even if we wanted to. My personal estimate is, therefore, that we will reach the first goal far sooner than the second.
You can hold, say, seven items in your mind while considering any subject. This vastly limits your ability to consider any complex system.
Really? A dubious notion in the first place, and untrue given the counterexamples of folks who go above 4 in dual N-back.
You seem to have a confused, fantastical notion of working memory, ungrounded in neuroscientific rigor. The rough analogy I have heard is that working memory is a coarse equivalent of registers, but this doesn’t convey how large the items brains hold in each working memory ‘slot’ can be. Nonetheless, more registers do not entail superpowers.
Alternate example: a novice chess player has to look at every piece, think about likely moves of every one, likely responses, etc. She becomes overwhelmed very quickly. An expert chess player quickly focuses on learned series of moves, known gambits and visible openings, which allows her to see several steps ahead.
Chess players’ ability increases over time in a way equivalent to an exponential increase in algorithmic search performance. This increase involves hierarchical pattern learning in the cortex. Short-term working memory is more involved in maintaining a stack of moves in the heuristic search algorithm humans use (the register analogy).
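A minimal sketch of that register analogy, assuming working memory acts as a bounded move stack in a depth-limited search; the toy game (a 1-2-3 Nim variant) and the depth bound are illustrative assumptions:

```python
# Depth-limited negamax on toy Nim (remove 1-3 stones; taking the last
# stone wins). The depth bound plays the role of the limited "stack of
# moves" working memory can hold while searching.

def moves(pile: int) -> list[int]:
    return [m for m in (1, 2, 3) if m <= pile]

def negamax(pile: int, depth: int) -> int:
    """Value for the player to move: +1 win, -1 loss, 0 beyond the horizon."""
    if pile == 0:
        return -1            # opponent just took the last stone: we lost
    if depth == 0:
        return 0             # out of "stack": position looks neutral
    return max(-negamax(pile - m, depth - 1) for m in moves(pile))

print(negamax(8, 3))  # 0: with a three-ply horizon the position looks playable
print(negamax(8, 4))  # -1: one more ply reveals pile 8 is lost to perfect play
```

On this reading, expertise deepens the effective horizon not by enlarging the stack but by letting learned patterns stand in for whole subtrees.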
Well, my opinion is that there already are such people, with several times the working memory. The impact of that has been absolutely enormous, and it is what brought us much of the advancement in technology and science. If you look at top physicists or mathematicians and the like, they literally can ‘cram “more math” into a thought than you would be able to otherwise’, vastly more. It probably doesn’t help a whole lot with economics and the like, though: the depth of prediction is naturally logarithmic in the computational power or knowledge of the initial state, so the payoffs from getting smarter (far from the movie Limitless) are rather low, and it is still primarily a game of chance.