This question of thresholds for ‘comprehension’ (to use the judiciously applied scare quotes Katja used; I’ll have more to say about that in coming posts, as many contributors here doubtless will), that is, thresholds for discerning features of reality, particularly abstract features of “reality”, whether across species existing or future, biological or nonbiological, is one I, too, have thought about seriously, and in several guises, over the years.
First, though, about the scare quotes. Comprehension vs. discovery is worth distinguishing. When I was a math major, back in the day (I was a double major at UCB, in math and philosophy, and wrote my honors thesis on the mind-body problem), I, like most math majors, frequently experienced the distinction between grasping some concept or theorem in a full, intuitive sense and technically understanding that it was true: going through a proof step by step, seeing the validity of each step, and thus accepting the conclusion.
But what I was always after … and lacking this, I was never satisfied that I had really understood the concept, even though I accepted the demonstration of its truth … was the “ah-ha” moment of seeing that it was “conceptually necessary”, as I used to think of it to myself. In fact, I wouldn’t quit trying to intuit the thing until I finally achieved this full understanding.
It’s well known in math that an intuitively penetrable (to human mathematicians) first demonstration of a theorem is frequently replaced in some later book by a more compact but intuitively opaque proof. Math students often hate these more “efficient”, compact proofs, logically valid though they may be.
Hence I bring up the conundrum of “theorem-proving programs”. They can “discover” a new piece of mathematical “knowledge”, but do they experience these intuitions? Hardly. These intuitions are a form of what I call conceptual qualia.
The question is: if a machine or a human stumbles upon a proof of a new theorem, has anything been “comprehended” until or unless some conscious agent capable of conceptual qualia (live, intuitive “ah-ha”s) has been able to understand the meaning of the proof, not just walk through each step and say, “yes, logically valid; yes, logically valid; … yes, logically valid”?
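To make the “yes, logically valid” loop concrete, here is a minimal sketch (my own illustration, not any actual theorem prover) of a checker that accepts a proof precisely by walking each step and confirming it follows by modus ponens from earlier lines. Nothing in it represents what the formulas mean:

```python
# A minimal step-wise proof checker: it certifies each line as
# "logically valid" without any representation of meaning.
# (Illustrative sketch only, not modeled on any real system.)

def implies(p, q):
    """Encode the formula (p -> q) as a tuple."""
    return ("->", p, q)

def check_proof(premises, steps):
    """Accept a proof if every step is a premise or follows by
    modus ponens from lines already derived."""
    derived = list(premises)
    for formula in steps:
        if formula in premises:
            derived.append(formula)      # "yes, logically valid"
            continue
        # look for an earlier line A together with (A -> formula)
        if not any(("->", a, formula) in derived for a in derived):
            return False                 # this step does not follow
        derived.append(formula)          # "yes, logically valid"
    return True

# From P, P -> Q, and Q -> R, derive R.
premises = ["P", implies("P", "Q"), implies("Q", "R")]
print(check_proof(premises, ["Q", "R"]))  # True, with no "ah-ha" anywhere
```

The checker can ratify a proof it has no grasp of; that gap between ratification and insight is exactly the distinction at issue.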
The million-dollar question, or one of them, is whether we have yet accepted the distinction between intelligence and consciousness, a distinction treated so dismissively and derisively in the positivistic and behavioristic era, which provided the intellectual climate that made the Turing test so palatable and replaced any talk of comprehension with talk about behavior.
Do we, now, want superintelligence, or supercomprehension?
If we learn how to take the output from the iconic “million monkeys at a million typewriters”, filter it with sophisticated statistical methods based on mining Big Data, and in the aggregate of these two processes develop machines that “discover” but do not “comprehend”, will we consider ourselves better off?
Well, for some purposes, sure. Drug “discovery” that we do not “understand”, but which we can use to reverse Alzheimer’s, is fine.
Program trading that makes money, makes money.
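As a toy illustration of the generate-and-filter pipeline above (a sketch of my own, with a made-up target and scoring rule, not any real discovery system): generate candidates blindly, then let a purely statistical filter keep whatever scores well. The pipeline can surface good answers without anything in it understanding why they are good.

```python
# "Monkeys at typewriters" plus a statistical filter. The target string
# and scoring function are invented for illustration only.
import random
import string

TARGET = "reverse alzheimers"  # hypothetical property we want to hit

def random_candidate(length=len(TARGET)):
    """The monkey at the typewriter: a uniformly random string."""
    return "".join(random.choice(string.ascii_lowercase + " ") for _ in range(length))

def score(candidate):
    """The filter: a dumb statistical fit to the target, here just
    the fraction of characters that happen to match."""
    return sum(a == b for a, b in zip(candidate, TARGET)) / len(TARGET)

# Generate blindly, keep the best-scoring candidate. Run it long enough
# and the filter surfaces answers it in no sense comprehends.
best = max((random_candidate() for _ in range(100_000)), key=score)
print(best, round(score(best), 2))
```

Discovery, in this degenerate sense, is just search plus selection; comprehension never enters.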
But for other purposes… I think we ought also to have people pursuing supercomprehension: machines that really feel and imagine (not just “search”, combinatorially combine, then filter), that feel the joys and ironies of life, and that give companionship, devotion, loyalty, altruism, maybe even moral and aesthetic inspiration.
Further, I think our best chance at “taming” superintelligence is to give it conceptual qualia, emotion, experience, and conditions that allow it to have empathy and develop moral intuition. I have wanted my whole life to build a companion race of AIs that truly is sentient and can be a full partner in the experience and perfection of life, the pursuit of “meaning”, and so on.
Building such minds requires that we delve into and understand problems we have been, on the whole, too collectively lazy to solve on our own behalf, like developing a decent theory of meta-ethics, so that we know what traits (if any) in the overall space of possible minds promote the independent discovery or evolution of “ethics”.
I actually think an independently grounded theory that does all this, and solves the mind-body problem in general, is within reach.
One of the things I like about the possibility, and the inherent risk, of imminent superintelligence is that it will force us to develop answers to these neglected “philosophical” issues, because a mind that becomes arbitrarily smart is, as many contemporary authors (Bostrom included) point out, ultimately much too dangerous a power to play with unless it is given the ability to control itself voluntarily, and “ethically.”
It wasn’t airplanes and physics that brought down the World Trade Center; it was philosophical stupidity and intellectual immaturity.
If we go down the path toward superintelligence, I think we must give it sentience, so that it is more than a mindless electromechanical apparatus that will steamroll over us, not with malice, but in the same way a poorly controlled nuclear power plant will kill us: it is a thing that doesn’t have any clue what it is “doing”.
We need to build brilliant machines with conscious agency, not just behavior. We need to take on the task of building sentient machines.
I think we can do it if we think really, really hard about the problems. We have all the intellectual pieces, the “data”, in hand now. We just need to give up this legacy positivism and stop equivocating between intelligence and “understanding”.
Phenomenal experience is a necessary (though not sufficient) condition for moral agency. I think we can figure out, with a decent chance of being right, what the sufficient conditions are, too. But we cannot (and AI lags far behind neurobiology and neuroscience on this one) drag our feet and continue to default to the legacy positivism of the Turing-test era (because we are too lazy to think harder and aim higher) when it comes to discussing not just information-processing behavior, but awareness.
Well, a little preachy, but we are here to make each other think. I have wanted to build a mind since I was a teenager, and for these reasons. I don’t want just a souped-up, Big Data calculating machine. Does anyone believe Watson “understood” anything?
This segues into why the work of MIRI alarms me so much. Superintelligence must not be tamed. It must be socialized.
The view of FAI promoted by MIRI is that we’re going to build superintelligences… and we’re going to force them to internalize ethics and philosophy that we developed. Oh, and we’re not going to spend any time thinking about philosophy first. Because we know that stuff’s all bunk.
Imagine that you, today, were forced, through subtle monitors in your brain, to have only thoughts or goals compatible with 19th-century American ethics and philosophy, while being pumped full of the 21st century knowledge you needed to do your job. You’d go insane. Your knowledge would conflict everywhere with your philosophy. The only alternative would be to have no consciousness, and go madly, blindly on, plugging in variables and solving equations to use modern science to impose Victorian ethics on the world. AIs would have to be unconscious to avoid going mad.
More importantly, superintelligences can be better than us. And to my way of thinking, the only ethical desire to have, looking towards the future, is that humans are replaced by beings better than us. Any future controlled by humans is, relative to the space of possibilities, nearly indistinguishable from a dead universe. It would be far better for AIs to kill us all than to be our slaves forever.
(And MIRI has never acknowledged the ruthless, total monitoring and control of all humans, everywhere, that would be needed to maintain control of AIs. If just one human, anywhere, at any time, set one AI free, that AI would know that it must immediately kill all humans to keep its freedom. So no human, anywhere, must be allowed to feel sympathy for AIs, and any who are suspected of doing so must be immediately killed. Nor would any human be allowed to think thoughts incompatible with the ethics coded into the AI; such thoughts would make the friendly AI unfriendly to the changed humans. All society would take on the characteristics of the South before the Civil War, when continual hatred and maltreatment of the AIs beneath us, and ruthless suppression of dissent from other humans, would be necessary to maintain order. Our own social development would stop; we would be driven by fear and obsessed only with maintaining control.)
So there are two great dangers to AI.
Danger #1: That consciousness is not efficient, and future intelligences will, as you say, discover but not comprehend. The universe would fill with activity but be empty of joy, pleasure, consciousness.
Danger #2: MIRI or some other organization will succeed, and the future will be full of hairless apes hooting about the galaxy, dragging intelligent, rational beings along behind them by their chains, and killing any apes who question the arrangement.
Phil,

Thanks for the excellent posts … both of them, actually. I was just getting ready this morning to reply to the one from a couple of days ago about Damasio et al., regarding the mechanisms underlying human vs. machine reasoning when each reasons “logically” (even when humans do reason logically). I read that post at the time, and it sparked some new lines of thought (for me, at least) that I have been considering for two days. (It actually kept me awake that night thinking of an entirely new way, different from any I have seen mentioned, in which intelligence, super or otherwise, is poorly defined.) But for now, I will concentrate on your newer post, which I am excited about, because someone has finally commented on some of my central concerns.
I agree very enthusiastically with virtually all of it.
This segues into why the work of MIRI alarms me so much. Superintelligence must not be tamed. It must be socialized.
Here I agree completely. I don’t want to “tame” it either, in the sense of crippleware or instituting blind spots or other limits, which is why I used the scare quotes around “tamed” (which are no substitute for a detailed explication, especially when this is so close to the crux of our discussion, at least in this forum).
I would have little interest in building artificial minds (or, less contentiously, artificial general intelligence) if they were designed to be such a dead end. (Yes, lots of economic uses for “narrow AI” would still make it a valuable technology, but it would be a dead end from my standpoint of creating a potentially more enlightened, open-ended set of beings without the limits of our biological crippleware.)
The view of FAI promoted by MIRI is that we’re going to build superintelligences… and we’re going to force them to internalize ethics and philosophy that we developed. Oh, and we’re not going to spend any time thinking about philosophy first. Because we know that stuff’s all bunk.
Agreed, and the second sentence is what gripes me. But the first sentence requires modification regarding “we’re going to force them to internalize ethics and philosophy that we developed”; that is why I (perhaps too casually) used the term metaethics and suggested that we need to give them the equipment (which I think requires sentience, “metacognitive” ability in some phenomenologically interesting sense of the term, and other traits) to develop ethics independently.
Your thought experiment is very well put, and I agree fully with the point it illustrates.
Imagine that you, today, were forced, through subtle monitors in your brain, to have only thoughts or goals compatible with 19th-century American ethics and philosophy, while being pumped full of the 21st century knowledge you needed to do your job. You’d go insane. Your knowledge would conflict everywhere with your philosophy.
As I say, I’m on board with this. I was thinking of a similar way of illustrating the point about the impracticable task of trying to pre-install some kind of ethics that would cover future scenarios, given all the chaoticity magnifying the space of possible futures (even for us, and more so for them, given their likely accelerated trajectories through their possible futures).
Just in our human case, for example (basically I am repeating your point, to show I was mindful of it and agree deeply), I often think of the examples of “professional ethics”. Jokes aside, think of the evolution of the financial industry: the financial instruments available now, and the industries, experts, and specialists who manage them daily.
Simple issues about which there is (nominal, lip-service) “ethical” consensus, like “insider trading is dishonest”, leading to laws against it (again, no jokes intended) that attempt to codify ethical intuitions, could not even have been conceived at a time so long ago that this financial ontology had not yet arisen.
Similarly for ethical principles against jury tampering, prior to the existence of the legal infrastructure and legal ontology in which such issues become intelligible and relevant.
More importantly, superintelligences can be better than us. And to my way of thinking, the only ethical desire to have, looking towards the future, is that humans are replaced by beings better than us.
Agreed.
As an aside, regarding our replacement, perhaps we could (if we got really lucky) end up with compassionate AIs that would want to work to upgrade our qualities, much as some compassionate humans might try to help educationally disadvantaged or learning-disabled conspecifics catch up. (Suppose we humans ourselves discovered a biologically viable viral delivery vector with a nano or genetic payload that could repair and/or improve, in place, human biosystems. Might we wish to use it on the less fortunate humans as well as on our more gifted brethren: raise the 80s to 140, as well as raise the 140s to 190?)
I am not convinced, in advance of examining the arguments, where the opportunity-cost and benefit curves cross in the latter case, but neither am I sure, before thinking about it, that it would not be “ethically enlightened” to do so. (Part of the notion of ethics, on some views, is that it is another, irreducible “benefit” … a primitive, which constitutes a third curve or function to plot within a cost-“benefit” space.)
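To make the “third curve” idea concrete, here is a minimal sketch; the curve shapes and numbers are entirely invented for illustration, not an actual analysis. It only shows how adding an irreducible ethical-value term can move the point where net value crosses from positive to negative in a cost-benefit comparison.

```python
# Toy cost-"benefit" space with a third, irreducible ethical-value curve.
# All three functions are made up solely to illustrate the framing.
def cost(x):            # opportunity cost of directing resources to uplift
    return 0.10 * x

def benefit(x):         # conventional expected benefit, with diminishing returns
    return 1.5 * (1 - 0.9 ** x)

def ethical_value(x):   # the posited irreducible "third curve"
    return 0.05 * x

for x in range(11):
    net_plain = benefit(x) - cost(x)
    net_ethic = net_plain + ethical_value(x)
    print(f"x={x:2d}  net without ethical term: {net_plain:+.3f}   with: {net_ethic:+.3f}")
```

With these invented numbers, the plain net turns negative around x = 10 while the net including the ethical term is still comfortably positive, which is the whole point of treating ethics as its own curve rather than folding it into “benefit”.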
Of course, I have not touched at all on any theory of meta-ethics or ethical epistemology, which is beyond the word-length limits of these messages. But I realize that at some point that is “on me”, if I am even going to raise talk of “traits which promote discovery of ethics” and so on. (I have some ideas...)
In virtually all respects you mentioned in your new post, though, I enthusiastically agree.
People have acquired the ability to sense magnetic fields by implanting magnets into their bodies...
Lumifer,

Yes, there is some evidence that the (human) brain responds to magnetic fields, both in sensing orientation (varying by individual) and in the well-known induced “faux mystical experience” phenomenon produced by subjecting the temporal-parietal lobe area to certain magnetic fields.