Phil,

Thanks for the excellent post ... both of them, actually. I was just getting ready this morning to reply to the one from a couple of days ago about Damasio et al., regarding the human vs. machine mechanisms underlying the two classes of beings' reasoning "logically" (even when humans do reason logically). I read that post at the time, and it sparked some new lines of thought, for me at least, that I have been considering for two days. (It actually kept me awake that night, thinking of an entirely new way, different from any I have seen mentioned, in which intelligence, super or otherwise, is poorly defined.) But for now I will concentrate on your newer post, which I am excited about, because someone has finally commented on some of my central concerns.
I agree very enthusiastically with virtually all of it.
> This segues into why the work of MIRI alarms me so much. Superintelligence must not be tamed. It must be socialized.
Here I agree completely. I don't want to "tame" it either, in the sense of crippleware, or of instituting blind spots or other limits, which is why I put scare quotes around "tamed" (though scare quotes are no substitute for a detailed explication, especially when this is so close to the crux of our discussion, at least in this forum).
I would have little interest in building artificial minds (or, less contentiously, artificial general intelligence) if they were designed to be such a dead end. (Yes, the many economic uses for "narrow AI" would still make it a valuable technology, but it would be a dead end from my standpoint of creating a potentially more enlightened, open-ended set of beings without the limits of our biological crippleware.)
> The view of FAI promoted by MIRI is that we’re going to build superintelligences… and we’re going to force them to internalize ethics and philosophy that we developed. Oh, and we’re not going to spend any time thinking about philosophy first. Because we know that stuff’s all bunk.
Agreed, and the second sentence is what galls me. But the first sentence requires modification, regarding "we're going to force them to internalize ethics and philosophy that we developed." That is why I (perhaps too casually) used the term metaethics, and suggested that we need to give them the equipment (which I think requires sentience, "metacognitive" ability in some phenomenologically interesting sense of the term, and other traits) to develop ethics independently.
Your thought experiment is very well put, and I agree fully with the point it illustrates.
> Imagine that you, today, were forced, through subtle monitors in your brain, to have only thoughts or goals compatible with 19th-century American ethics and philosophy, while being pumped full of the 21st century knowledge you needed to do your job. You’d go insane. Your knowledge would conflict everywhere with your philosophy.
As I say, I'm on board with this. I was thinking of a similar way of illustrating the point about the impracticable task of trying to pre-install some kind of ethics that would cover future scenarios, given all the chaoticity magnifying the space of possible futures (even for us, and more so for them, given their likely accelerated trajectories through their possible futures).
Just in our human case (and here I am basically repeating your point, to show that I was mindful of it and agree deeply), I often think of the examples of "professional ethics". Jokes aside, think of the evolution of the financial industry: the financial instruments available now, and the industries, experts, and specialists who manage them daily.
Simple issues about which there is (nominal, lip-service) "ethical" consensus, like "insider trading is dishonest", leading to (again, no jokes intended) laws against it that attempt to codify our ethical intuitions, could not have been conceived in a time so long ago that this financial ontology had not yet arisen.
Similarly for ethical principles against jury tampering, prior to the existence of the legal infrastructure and legal ontology in which such issues become intelligible and relevant.
> More importantly, superintelligences can be better than us. And to my way of thinking, the only ethical desire to have, looking towards the future, is that humans are replaced by beings better than us.
Agreed.
As an aside, regarding our replacement: perhaps we could, if we got really lucky, end up with compassionate AIs that would want to work to upgrade our qualities, much as some compassionate humans might try to help educationally disadvantaged or learning-disabled conspecifics catch up. (Suppose we humans ourselves discovered a biologically viable viral delivery vector with a nano or genetic payload that could repair and/or improve, in place, human biosystems. Might we wish to use it on the less fortunate humans as well as on our more gifted brethren, raising the 80s to 140 as well as raising the 140s to 190?)
I am not convinced, in advance of examining the arguments, of where the opportunity cost/benefit curves cross in the latter case, but neither am I sure, before thinking about it, that it would not be "ethically enlightened" to do so. (Part of the notion of ethics, on some views, is that it is another, irreducible "benefit" ... a primitive, which constitutes a third curve or function to plot within a cost-"benefit" space.)
Of course, I have not touched on any theory of metaethics, or ethical epistemology, which is beyond the word-length limits of these messages. But I realize that at some point that is "on me", if I am even going to raise talk of "traits which promote discovery of ethics" and so on. (I have some ideas...)
In virtually all respects you mentioned in your new post, though, I enthusiastically agree.