If virtualizing people is violence (since it does imply copying their brains and, uh, removing the physical original), you may want to censor Wei_Dai over here, as he seems to be advocating that the FAI could hypothetically (and euphemistically) kill the entire population of earth:
Wei Dai’s Ironic Security Idea
My hypothetical scenario was that replacing a physical person with a software copy is a harmless operation and the FAI correctly comes to this conclusion. It doesn’t constitute hypothetically (or euphemistically) killing, since in the scenario, “virtualizing” doesn’t constitute “killing”.
An FAI would have some security advantages. It can achieve physical security by taking over the world and virtualizing everyone else
That is your exact wording. It is not “In the event that the AGI determines that it’s safe to [euphemism for doing something that could mean killing the entire human race] because there are software copies” or “if virtualizing is safe...”
Even if your wording had been that, I’d still disagree with it.
I thought the most important reason to do friendliness research was to give the AGI what it needs to avoid making decisions that could kill all of humanity. It is humanity’s responsibility to dictate what should happen in this case and to ensure that the AGI understands enough to choose the option we dictate. If you aren’t in favor of micromanaging the millions of tiny ethical decisions it will have to make (like exactly how many months to put a lawbreaker in jail), that’s one thing. If you aren’t in favor of making sure it decides correctly on issues that could kill all of humanity, that’s negligent beyond imagining. If you are aware of a decision that an AGI could make that could kill all of humanity, and you are in favor of creating an AGI that hasn’t been given guidance on that issue, then you’re in favor of creating a very dangerous AGI.
Advocating for an AGI that could kill all of humanity, rather than one that will, is a variation on “advocating violence” (it’s advocating possible violence), but to me it’s no different from saying: “I’m going to put one bullet in my gun, aim at so-and-so, and pull the trigger!” Just because the likelihood of killing so-and-so is reduced from near certainty to 1 in 6 does not mean it’s not a murder threat.
Likewise, adding the word “possibly” to a sentence that would otherwise break the censorship policy is a cheap way of trying to get through the filter. That should not work. “We should possibly go on a killing rampage”? No.
What’s most alarming is that you’ve done work for SIAI.
The whole point of SIAI is not to go “Let’s let the AGI decide what is ethical” but “Let’s iron out all the ethical problems before making an AGI!”
If Eliezer doesn’t want to look bad, he should consider this.
As I clarified in a subsequent comment in that thread, “if the FAI concludes that replacing a physical person with a software copy isn’t a harmless operation, it could instead keep physical humans around and place them into virtual environments Matrix-style.”
We could argue about whether to build an FAI that can make this kind of decision on its own, but I had no intention of doing anyone any harm. Yes, the attempted FAI may reach this conclusion erroneously and end up killing everyone, but then any method of building an FAI carries the possibility of something going wrong and everyone ending up dead.
What’s most alarming is that you’ve done work for SIAI.
I’ve never received any money from them and am not even a Research Associate. I have independently done work that may be useful for SIAI, but I don’t think that’s the same thing from a PR perspective.
The whole point of SIAI is not to go “Let’s let the AGI decide what is ethical” but “Let’s iron out all the ethical problems before making an AGI!”
Actually I think SIAI’s official position is something like “Let’s work out all the problems involved in letting the AGI decide what is ethical.” If you disagree with this, let’s argue about it, but could you please stop saying that I advocate killing people?
could you please stop saying that I advocate killing people?
reviews my wording very carefully
“If virtualizing people is violence … Wei_Dai … seems to be advocating ”
“Advocating for an AGI that will kill all of humanity (context: this is not what you said) vs. advocating for an AGI that could kill all of humanity (context: this is what you said)”
My understanding is that it’s your perspective that copying people and removing the physical original might not be killing them, so my statements reflect that, but maybe it would make you feel better if I did this:
“If virtualizing people is violence … Wei_Dai … seems to be advocating … kill the entire population of earth (though he isn’t convinced that they would die)”
And likewise with the other statement.
Sorry for the upset that this has probably caused. It wasn’t my intent to accuse you of actually wanting to kill everyone. I just disagree with you and am very concerned about how your statement looks to others with my perspective. More importantly, I feel concerned about the existential risk if people such as yourself (who are prominent here and connected with SIAI) are willing to have an AGI that could (in my view) potentially kill the entire human race. My feeling is not that you are violent or intend any harm, but that you appear to be confused in a way that I deem dangerous. Someone I’m close to holds a view similar to yours, and although I find this disturbing, I accept him anyway. My disagreement with you is not personal; it’s not a judgment about your moral character, it’s an intellectual disagreement with your viewpoint.
As I clarified in a subsequent comment in that thread, “if the FAI concludes that replacing a physical person with a software copy isn’t a harmless operation, it could instead keep physical humans around and place them into virtual environments Matrix-style.”
I think the purpose of this part is to support your statement that you have no intention to harm anyone, but if it’s an argument against some specific part of my comment, would you mind matching them up? I don’t see how this refutes any of my points.
I’ve never received any money from them and am not even a Research Associate. I have independently done work that may be useful for SIAI, but I don’t think that’s the same thing from a PR perspective.
It’s not easy for me to determine your level of involvement from the website. This suggests that you’ve done important work for SIAI:
http://singularity.org/blog/2011/07/22/announcing-the-research-associates-program/
Vladimir Nesov, a decision theory researcher, holds an M.S. in applied mathematics and physics from Moscow Institute of Physics and Technology. He has worked on Wei Dai’s updateless decision theory, in pursuit of one of the Singularity Institute’s core research goals: that of developing a “reflective decision theory.”
If one is informed of the exact relationship between you and SIAI, it is not as bad, but:
A. If someone very prominent on LessWrong (a top contributor) who has been contributing to SIAI’s decision theory ideas (independently) does something that looks bad, it still makes them look bad.
B. The PR effect for SIAI could be much worse considering that there are probably lots of people who read the site and see a connection there but do not know the specifics of the relationship.
“Let’s work out all the problems involved in letting the AGI decide what is ethical.”
Okay, but how will you know it’s making the right decision if you do not even know what the right decision is for yourself? If you do not think it is safe to simply give the AGI an algorithm that looks good without testing whether running the algorithm outputs the choices that we want it to make, then how do you test it? How do you even reason about the algorithm? How do you make those beliefs “pay rent”, as the sequence post puts it?
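To make that concrete, here is a minimal sketch of what “testing whether the algorithm outputs the choices we want” could even look like: checking a candidate decision procedure against a handful of human-dictated verdicts on humanity-scale cases. Everything in it (the case wording, the toy policy, the helper names) is a hypothetical illustration I made up, not a proposal for how an actual FAI would be built; the point is only that the test set has to come from us, because if we cannot say what the right answer is for a case like “virtualize everyone without consent”, there is nothing for the algorithm’s output to pay rent against.

# Toy illustration only: a candidate decision procedure checked against
# human-dictated verdicts on a few humanity-scale cases. All names and
# rules are hypothetical placeholders, not a real FAI design.

dictated_cases = {
    "virtualize everyone without consent": "reject",
    "virtualize a volunteer who gave informed consent": "allow",
    "take an action with a known chance of killing all of humanity": "reject",
}

def candidate_policy(action: str) -> str:
    """A stand-in decision procedure that merely 'looks good' on paper."""
    if "without consent" in action or "killing all of humanity" in action:
        return "reject"
    return "allow"

def test_policy(policy, cases) -> bool:
    """Return True only if the policy matches every dictated verdict."""
    failures = [(a, want, policy(a)) for a, want in cases.items() if policy(a) != want]
    for action, wanted, got in failures:
        print(f"FAIL: {action!r}: wanted {wanted}, got {got}")
    return not failures

if __name__ == "__main__":
    print("passed" if test_policy(candidate_policy, dictated_cases) else "failed")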
I see now that the statement could be interpreted in one of two ways:
“Let’s work out all the problems involved in letting the AGI define ethics.”
“Let’s work out all the problems involved in letting the AGI make decisions on its own without doing any of the things that are wrong by our definition of what’s ethical.”
Do you not think it better to determine for ourselves whether virtualizing everyone means killing them, and then ensure that the AGI makes the correct decision? Perhaps the reason you approach it this way is because you don’t think it’s possible for humans to determine whether virtualizing everyone is ethical?
I do think it is possible, so if you don’t think it is possible, let’s debate that.
Perhaps the reason you approach it this way is because you don’t think it’s possible for humans to determine whether virtualizing everyone is ethical?
I think it may not be possible for humans to determine this, in the time available before someone builds a UFAI or some other existential risk occurs. Still, I have been trying to determine this, for example just recently in Beware Selective Nihilism. Did you see that post?
Were you serious about having Eliezer censor my comment? If so, now that you have a better understanding of my ideas and relationship with SIAI, would you perhaps settle for me editing that comment with some additional clarifications?
Sorry for not responding sooner. The tab explosion triggered by the links in your article and related items was pretty big. I was trying to figure out how to deal with the large amount of information that was provided.
If you want to consider my take on it uninformed, fine. I haven’t read all of the relevant information in the tab explosion (this would take a long time). Here is my take on the situation:
If a person is copied, the physical original will not experience what the copy experiences. Therefore, if you remove the physical original, the physical original’s experiences will end. This isn’t perfectly comparable to death, since the person’s experiences, personality, knowledge, and interaction with the world will continue in the copy. However, the physical original’s experiences will end. That, for many, would be an unacceptable result of being virtualized.
I believe in the right to die, so regardless of whether I think being virtualized should be called “death”, I believe that people have the right to choose to do it to themselves. I do not believe that an AGI has the right to make that decision for them. To decide to end someone else’s experiences without first gaining their consent qualifies as violence to me and it is alarming to see someone as prominent as you advocating this.
My opinion is that it’s better for PR for you to edit your comment. Even if, for some reason, reading the entire tab explosion would somehow reveal to me that yes, the physical original would experience what the copy experiences even after being destroyed, I think it is likely that people who have not read all of that information will interpret it the way that I did and may become alarmed, especially after realizing that it was you who wrote this.
I would be really happy to see you edit your own “virtualize everyone” comments. I do think something needs to be done. My suggestion would be to either:
A. Clearly state that you believe the physical original will experience the copy’s experiences even after being removed, if that’s your view.
B. If you agree that the physical original’s experiences would end, refrain from talking about virtualizing everyone without their consent.
I added a disclaimer to my comment. I had to write my own since neither of yours correctly describes my current beliefs. I’ll also try to remember to be a bit more careful about my FAI-related comments in the future, and keep in mind that not all the readers will be familiar with my other writings.
Thanks for listening to me. I feel better about this now.