I’d like to mention three aspects. The first two point to a somewhat optimistic direction, while the third one is very much in the air.
ASI(s) would probably explore and adopt some kind of ethics
Assuming that it is not a singleton (and taking into account that even a singleton has an internal “society of mind”), ASIs would need to deal with various potentially conflicting interests and viewpoints, and would also face existential risks of their own (very powerful entities can easily destroy their reality, themselves, and everything in their vicinity if they are not careful).
It seems that some kind of ethics is necessary to handle complicated situations like these, so it is likely that ASIs will explore ethical issues (or they would need to figure out a replacement for ethics). The question is whether what we do before ASI arrival can make things better (or worse) in this sense (I wrote a somewhat longer exploration of that last year: Exploring non-anthropocentric aspects of AI existential safety).
ASI(s) would probably have access to direct human and animal experiences
Here I am disagreeing with the following point:
An ASI likely won’t have a human body and direct experiences of pain and pleasure and emotions—it won’t be able to “try things on” to verify if its reasoning on ethics is “correct”
The reason is that some ASIs are likely to be curious enough to explore hybrid consciousness with biological entities via brain-computer interfaces and such, and, as a result, would have the ability to directly experience the inner world of biological entities.
The question here is whether we should try to accelerate this path from our side (I tend to think that this can be done relatively fast via high-end non-invasive BCI, but the risks associated with this path are pretty high).
There is still a gap
The previous two aspects do point in a somewhat optimistic direction (ASIs are likely to develop ethics or some equivalent, they are likely to know how we feel inside, and we might also be able to assist these developments, and probably should).
But this is still not enough for us. What would it take for this ethics to adequately take the interests of humans into account? That’s a rather long and involved topic, and I’ve seen various proposals, but I don’t think we know (it’s not like our present society sufficiently takes the interests of humans into account; we would really like the future to do better than that).
Thank you for the comment! You bring up some interesting things. To your first point, I guess this could be added to the “For an ASI figuring out ethics” list, i.e., that an ASI would likely be motivated to figure out some system of ethics based on the existential risks it itself faces. However, by “figuring out ethics,” I really mean figuring out a system of ethics agreeable to humans (or “aligned” with humans); I probably should’ve made this explicit in my post. Further, I’d really like it if the ASI(s) “lived” by that system. It’s not clear to me that an ASI being worried about existential risks for itself would translate to that (which I think is your third point). The way I see it, humans only care about ethics because of the possibility of pain (and death). I put “and death” in parentheses because I don’t think we actually care directly about death; we care about the emotional pain that comes when thinking about our own death or the deaths of others (and whether death will involve significant physical pain leading up to it).
This leads to your second point—what you mention would seem to fall under “Info an ASI will likely have” number 8: “…the ability to run experiments on people” with the useful addition of “and animals, too.” I hadn’t thought about an ASI having hybrid consciousness in the way you mention (to this point, see below). I have two concerns with this: one is that it’d likely take some time, during which the ASI may unknowingly do unethical things. The second concern is more important, I think: being able to get the experience of pain when you want to is significantly different from not being able to control the pain. I’m not sure that a “curious” ASI getting an experience of pain (and other human/animal things) would translate into an empathic ASI that would want our lives to “go well.” But these are interesting things to think about, thanks for bringing them up!
One thing that makes it difficult for me personally to imagine what an ASI (in particular, the first one or few) might do is what hardware it might be built on (classical computers, quantum computers, biology-based computers, some combination of systems, etc.). Also, I’m very sketchy on what might motivate an ASI, which is related to the hardware question, since our human biological “hardware” is ultimately where human motivations come from. It’s difficult for me to see beyond an ASI just following some goal(s) we effectively give it to start with, like any old computer program, but way more complicated, of course. This leads to thoughts of goal misspecification and emergent properties, but I won’t get into those.
If, to give it its own motivation, an ASI is built from the start as a human hybrid, we better all hope they pick the right human for the job!
Right.
Basically, however one slices it, I think that the idea that superintelligent entities will subordinate their interests, values, and goals to those of unmodified humans is completely unrealistic (and trying to force it is probably quite unethical, in addition to being unrealistic).
So what we need is for superintelligent entities to adequately take the interests of “lesser beings” into account.
So we actually need them to have much stronger ethics than typical human ethics (our track record of taking the interests of “lesser beings” into account is really bad; if superintelligent entities end up having ethics as defective as typical human ethics, things will not go well for us).
Yes, I sure hope ASI has stronger human-like ethics than humans do! In the meantime, it’d be nice if we could figure out how to raise human ethics as well.