Blindsight was very well written but based on a premise that I think is importantly and dangerously wrong. That premise is that consciousness (in the sense of cognitive self-awareness) is not important for complex cognition.
This is the opposite of true, and a failure to recognize this is why people are predicting fantastic tool AI that doesn’t become self-aware and goal-directed.
The proof won’t fit in the margin unfortunately. To just gesture in that direction: it is possible to do complex general cognition without being able to think about one’s self and one’s cognition. It is much easier to do complex general cognition if the system is able to think about itself and its own thoughts.
Just checking if I understood your argument: is the general point that an algorithm that can think about literally everything is simpler and therefore easier to make or evolve than an algorithm that can think about literally everything except for itself and how other agents perceive it?
Exactly.
I’d go a bit farther and say it’s easier to develop an algorithm that can think about literally everything than one that can think about roughly half of things. That’s because the easiest general intelligence algorithms are about learning and reasoning, which apply to everything.
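A minimal toy sketch of what I mean, purely illustrative (the function names and the two-entry world model are my own invention, not anything from the book or this thread): a fully general reasoner applies one uniform lookup rule to every topic, self included, while a reasoner that must avoid modeling itself needs an extra carve-out threaded through that same rule.

```python
# Toy contrast (invented names, invented world model): general reasoning is one
# rule applied uniformly; self-blindness is that same rule plus extra machinery.

def general_reasoner(topic, world_model):
    # One uniform rule: return whatever the model knows about a topic.
    # "self" is just another entry, so self-modeling comes for free.
    return world_model.get(topic, set())

def self_blind_reasoner(topic, world_model, self_tags=frozenset({"self"})):
    # Same rule, plus a carve-out that must be checked on every lookup
    # to keep the system from ever reasoning about itself.
    if topic in self_tags:
        return set()
    return world_model.get(topic, set())

world_model = {
    "predators": {"fast", "dangerous"},
    "self": {"slow", "visible to predators"},
}
print(general_reasoner("self", world_model))     # {'slow', 'visible to predators'}
print(self_blind_reasoner("self", world_model))  # set(): the blindness is the extra code
```

The point of the sketch is just that the blindness is the part that costs extra code; the self-awareness comes along for free with generality.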
I don’t see where you get that. I saw no suggestion that the aliens (or vampires) in Blindsight were unaware of their own existence, or that they couldn’t think about their own interactions with the world. They didn’t lack any cognitive capacities at all. They just had no qualia, and therefore didn’t see the point of doing anything just for the experience.
There’s a gigantic difference between cognitive self-awareness and conscious experience.
I believe the Scramblers from Blindsight weren’t self-aware, which means they couldn’t think about their own interactions with the world.

As I recall, the crew was giving one of the Scramblers a series of cognitive tests. It aced all the tests that had to do with numbers and spatial reasoning, but failed a test that required the test subject to be self-aware.
I guess it depends on how it’s described in context. And I have to admit it’s been a long time. I’d go reread it to see, but I don’t think I can handle any more bleakness right now...

Whenever I find my will to live becoming too strong, I read Peter Watts. —James Nicoll
Thanks, is there anything you can point me to for further reading, whether by you or others?
it is possible to do complex general cognition without being able to think about one’s self and one’s cognition. It is much easier to do complex general cognition if the system is able to think about itself and its own thoughts.

I can see this making sense in one frame, but not in another. The frame which seems most strongly to support the ‘Blindsight’ idea is Friston’s stuff—specifically how the more successful we are at minimizing predictive error, the less conscious we are.[1]
My general intuition, in this frame, is that as intelligence increases more behaviour becomes automatic/subconscious. It seems compatible with your view that a superintelligent system would possess consciousness, but that most/all of its interactions with us would be subconscious.
Would like to hear more about this point, could update my views significantly. Happy for you to just state ‘this because that, read X, Y, Z etc’ without further elaboration—I’m not asking you to defend your position, so much as I’m looking for more to read on it.
This is my potentially garbled synthesis of his stuff, anyway.
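To put my (possibly garbled) reading of Friston in toy form, here is a sketch in Python. The threshold gate and all the numbers are invented for illustration, and this is nowhere near the real free-energy formalism: a predictor updates toward noisy observations, and a crude "conscious episode" flag fires only when prediction error crosses a threshold. As the error gets minimized, the flags stop firing and processing goes "automatic":

```python
import random

# Toy predictive-processing loop (my own illustration, not Friston's formalism):
# a scalar predictor is nudged toward noisy observations of a stable quantity,
# and a "conscious episode" is flagged only when prediction error is large.
def run(steps=2000, lr=0.05, threshold=0.5):
    prediction = 0.0
    flags = []
    for _ in range(steps):
        observation = 3.0 + random.gauss(0, 0.1)  # stable, learnable environment
        error = observation - prediction
        flags.append(abs(error) > threshold)      # crude stand-in for "conscious" processing
        prediction += lr * error                  # update that minimizes prediction error
    return sum(flags[:100]), sum(flags[-100:])

early, late = run()
print(f"'conscious' episodes in first 100 steps: {early}; in last 100 steps: {late}")
# Typically tens of episodes early and zero late: better prediction, less "consciousness".
```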
I’m not sure about Friston’s stuff to be honest.
But Watts lists a whole bunch of papers in support of the Blindsight idea, contra Seth’s claim — to quote Watts:
“In fact, the nonconscious mind usually works so well on its own that it actually employs a gatekeeper in the anterior cingulate cortex to do nothing but prevent the conscious self from interfering in daily operations”
footnotes: Matsumoto, K., and K. Tanaka. 2004. Conflict and Cognitive Control. Science 303: 969-970; Kerns, J.G., et al. 2004. Anterior Cingulate Conflict Monitoring and Adjustments in Control. Science 303: 1023-1026; Petersen, S.E., et al. 1998. The effects of practice on the functional anatomy of task performance. Proceedings of the National Academy of Sciences 95: 853-860
“Compared to nonconscious processing, self-awareness is slow and expensive”
footnote: Matsumoto and Tanaka above
“The cost of high intelligence has even been demonstrated by experiments in which smart fruit flies lose out to dumb ones when competing for food”
footnote: Proceedings of the Royal Society of London B (DOI 10.1098/rspb.2003.2548)
“By way of comparison, consider the complex, lightning-fast calculations of savants; those abilities are noncognitive, and there is evidence that they owe their superfunctionality not to any overarching integration of mental processes but to relative neurological fragmentation”
footnotes: Treffert, D.A., and G.L. Wallace. 2004. Islands of genius. Scientific American 14: 14-23; Anonymous. 2004. Autism: making the connection. The Economist 372(8387): 66
“Even if sentient and nonsentient processes were equally efficient, the conscious awareness of visceral stimuli—by its very nature— distracts the individual from other threats and opportunities in its environment”
footnote: Wegner, D.M. 1994. Ironic processes of mental control. Psychol. Rev. 101: 34-52
“Chimpanzees have a higher brain-to-body ratio than orangutans, yet orangs consistently recognise themselves in mirrors while chimps do so only half the time”
footnotes: Aiello, L., and C. Dean. 1990. An introduction to human evolutionary anatomy. Academic Press, London; Gallup, G.G. (Jr.). 1997. On the rise and fall of self-conception in primates. In The Self Across Psychology: self-recognition, self-awareness, and the Self Concept. Annals of the NY Acad. Sci. 818: 4-17
“it turns out that the unconscious mind is better at making complex decisions than is the conscious mind”
footnote: Dijksterhuis, A., et al. 2006. Science 311: 1005-1007
(I’m also reminded of DFW’s How Tracy Austin Broke My Heart.)
To be clear, I’m not arguing that “look at all these sources, it must be true!” (we know that kind of argument doesn’t work). I’m just hoping for somewhat more object-level counterarguments, or perhaps a better reason to dismiss them as misguided (or to dismiss the picture Watts paints using them) than what Seth gestured at. I’m guessing he meant “complex general cognition” to point to something other than pure raw problem-solving performance.