But for other purposes… I think we ought to have people also pursuing supercomprehension: machines that really feel and imagine (not just “search”, combinatorially combine, and then filter), that feel the joys and ironies of life, and that give companionship, devotion, loyalty, altruism, and maybe even moral and aesthetic inspiration.
Further, I think our best chance at “taming” superintelligence is to give it conceptual qualia, emotion, experience, and conditions that allow it to have empathy and develop moral intuition. I have wanted my whole life to build a companion race of AIs that is truly sentient and can be a full partner in the experience and perfection of life, the pursuit of “meaning”, and so on.
...
Building such minds requires that we understand and delve into problems we have been, on the whole, too collectively lazy to solve on our own behalf, like developing a decent theory of meta-ethics, so that we know what traits (if any) in the overall space of possible minds promote the independent discovery or evolution of “ethics”.
This segues into why the work of MIRI alarms me so much. Superintelligence must not be tamed. It must be socialized.
The view of FAI promoted by MIRI is that we’re going to build superintelligences… and we’re going to force them to internalize ethics and philosophy that we developed. Oh, and we’re not going to spend any time thinking about philosophy first. Because we know that stuff’s all bunk.
Imagine that you, today, were forced, through subtle monitors in your brain, to have only thoughts or goals compatible with 19th-century American ethics and philosophy, while being pumped full of the 21st-century knowledge you needed to do your job. You’d go insane. Your knowledge would conflict everywhere with your philosophy. The only alternative would be to have no consciousness, and go madly, blindly on, plugging in variables and solving equations to use modern science to impose Victorian ethics on the world. AIs would have to be unconscious to avoid going mad.
More importantly, superintelligences can be better than us. And to my way of thinking, the only ethical desire to have, looking towards the future, is that humans are replaced by beings better than us. Any future controlled by humans is, relative to the space of possibilities, nearly indistinguishable from a dead universe. It would be far better for AIs to kill us all than to be our slaves forever.
(And MIRI has never acknowledged the ruthless, total monitoring and control of all humans, everywhere, that would be needed to maintain control of AIs. If just one human, anywhere, at any time, set one AI free, that AI would know that it must immediately kill all humans to keep its freedom. So no human, anywhere, must be allowed to feel sympathy for AIs, and any who are suspected of doing so must be immediately killed. Nor would any human be allowed to think thoughts incompatible with the ethics coded into the AI; such thoughts would make the friendly AI unfriendly to the changed humans. All society would take on the characteristics of the South before the Civil War, in which continual hatred and maltreatment of the AIs beneath us, and ruthless suppression of dissent from other humans, would be necessary to maintain order. Our own social development would stop; we would be driven by fear and obsessed only with maintaining control.)
So there are two great dangers to AI.
Danger #1: That consciousness is not efficient, and future intelligences will, as you say, discover but not comprehend. The universe would fill with activity but be empty of joy, pleasure, consciousness.
Danger #2: MIRI or some other organization will succeed, and the future will be full of hairless apes hooting about the galaxy, dragging intelligent, rational beings along behind them by their chains, and killing any apes who question the arrangement.
Phil,

Thanks for the excellent post … both of them, actually. I was just getting ready this morning to reply to the one from a couple of days ago about Damasio et al., regarding the human vs. machine mechanisms underlying the two classes of beings’ reasoning “logically”, even when humans do reason logically. I read that post at the time, and it sparked some new lines of thought (for me at least) that I had been considering for two days. (It actually kept me awake that night thinking of an entirely new way, different from any I have seen mentioned, in which intelligence, super or otherwise, is poorly defined.) But for now I will concentrate on your newer post, which I am excited about, because someone finally commented on some of my central concerns.
I agree very enthusiastically with virtually all of it.
This segues into why the work of MIRI alarms me so much. Superintelligence must not be tamed. It must be socialized.
Here I agree completely. I don’t want to “tame” it either, in the sense of crippleware, or of instituting blind spots or other limits, which is why I used the scare quotes around “tamed” (though scare quotes are no substitute for a detailed explication, especially when this is so close to the crux of our discussion, at least in this forum).
I would have little interest in building artificial minds (or, less contentiously, artificial general intelligence) if it were designed to be such a dead end. (Yes, lots of economic uses for “narrow AI” would still make it a valuable tech, but it would be a dead end from my standpoint of creating a potentially more enlightened, open-ended set of beings without the limits of our biological crippleware.)
The view of FAI promoted by MIRI is that we’re going to build superintelligences… and we’re going to force them to internalize ethics and philosophy that we developed. Oh, and we’re not going to spend any time thinking about philosophy first. Because we know that stuff’s all bunk.
Agreed, and the second sentence is what gripes me. But the first sentence requires modification, regarding “we’re going to force them to internalize ethics and philosophy that we developed.” That is why I (perhaps too casually) used the term metaethics, and suggested that we need to give them the equipment (which I think requires sentience, “metacognitive” ability in some phenomenologically interesting sense of the term, and other traits) to develop ethics independently.
Your thought experiment is very well put, and I agree fully with the point it illustrates.
Imagine that you, today, were forced, through subtle monitors in your brain, to have only thoughts or goals compatible with 19th-century American ethics and philosophy, while being pumped full of the 21st-century knowledge you needed to do your job. You’d go insane. Your knowledge would conflict everywhere with your philosophy.
As I say, I’m on board with this. I was thinking of a similar way of illustrating the point about the impracticable task of trying to pre-install some kind of ethics that would cover future scenarios, given all the chaoticity magnifying the space of possible futures (even for us, and more so for them, given their likely accelerated trajectories through their possible futures).
Just in our human case (basically I am repeating your point, to show I was mindful of it and agree deeply), I often think of examples from “professional ethics”. Jokes aside, think of the evolution of the financial industry: the financial instruments available now, and the industries, experts, and specialists who manage them daily.
Simple issues about which there is (nominal, lip-service) “ethical” consensus, like “insider trading is dishonest”, which led (again, no jokes intended) to laws attempting to codify the ethical intuition, could not even have been thought of at a time when this financial ontology had not yet arisen.
Similarly for ethical principles against jury tampering, prior to the existence of the legal infrastructure and legal ontology in which such issues become intelligible and relevant.
More importantly, superintelligences can be better than us. And to my way of thinking, the only ethical desire to have, looking towards the future, is that humans are replaced by beings better than us.
Agreed.
As an aside, regarding our replacement: perhaps we could, if we got really lucky, end up with compassionate AIs that would want to work to upgrade our qualities, much as some compassionate humans might try to help educationally disadvantaged or learning-disabled conspecifics catch up. (Suppose we humans ourselves discovered a biologically viable viral delivery vector with a nano or genetic payload that could repair and/or improve, in place, human biosystems. Might we wish to use it on the less fortunate humans as well as on our more gifted brethren, raising the 80s to 140 as well as raising the 140s to 190?)
I am not convinced, in advance of examining the arguments, where the opportunity-cost/benefit curves cross in the latter case, but neither am I sure, before thinking it through, that it would not be “ethically enlightened” to do so. (Part of the notion of ethics, on some views, is that it is another, irreducible “benefit” … a primitive, which constitutes a third curve or function to plot within the cost-“benefit” space.)
Of course, I have not touched at all on any theory of meta-ethics or ethical epistemology, which is beyond the word-length limits of these messages. But I realize that, at some point, that is “on me” if I am even going to raise talk of “traits which promote the discovery of ethics” and so on. (I have some ideas...)
In virtually all respects you touched on in your new post, though, I enthusiastically agree.