Could you please stop saying that I advocate killing people?
reviews my wording very carefully
“If virtualizing people is violence … Wei_Dai … seems to be advocating”
“Advocating for an AGI that will kill all of humanity (context: this is not what you said) vs. advocating for an AGI that could kill all of humanity (context: this is what you said)”
My understanding is that it’s your perspective that copying people and removing the physical original might not be killing them, so my statements reflect that, but maybe it would make you feel better if I did this:
“If virtualizing people is violence … Wei_Dai … seems to be advocating … kill the entire population of earth (though he isn’t convinced that they would die)”
And likewise with the other statement.
Sorry for the upset this has probably caused. It wasn’t my intent to accuse you of actually wanting to kill everyone. I just disagree with you and am very concerned about how your statement looks to others who share my perspective. More importantly, I feel concerned about the existential risk if people such as yourself (who are prominent here and connected with SIAI) are willing to have an AGI that could (in my view) potentially kill the entire human race. My feeling is not that you are violent or intend any harm, but that you appear to be confused in a way that I deem dangerous. Someone I’m close to holds a view similar to yours, and although I find this disturbing, I accept him anyway. My disagreement with you is not personal; it’s not a judgment about your moral character, it’s an intellectual disagreement with your viewpoint.
As I clarified in a subsequent comment in that thread, “if the FAI concludes that replacing a physical person with a software copy isn’t a harmless operation, it could instead keep physical humans around and place them into virtual environments Matrix-style.”
I think the purpose of this part is to support your statement that you have no intention to harm anyone, but if it’s an argument against some specific part of my comment, would you mind matching them up? I don’t see how this refutes any of my points.
I’ve never received any money from them and am not even a Research Associate. I have independently done work that may be useful for SIAI, but I don’t think that’s the same thing from a PR perspective.
It’s not easy for me to determine your level of involvement from the website. This here suggests that you’ve done important work for SIAI:
http://singularity.org/blog/2011/07/22/announcing-the-research-associates-program/
Vladimir Nesov, a decision theory researcher, holds an M.S. in applied mathematics and physics from Moscow Institute of Physics and Technology. He has worked on Wei Dai’s updateless decision theory, in pursuit of one of the Singularity Institute’s core research goals: that of developing a “reflective decision theory.”
If one is informed of the exact relationship between you and SIAI, it is not as bad, but:
A. If someone very prominent on LessWrong (a top contributor) who has been contributing to SIAI’s decision theory ideas (independently) does something that looks bad, it still makes SIAI look bad.
B. The PR effect for SIAI could be much worse, considering that there are probably lots of people who read the site and see a connection there but do not know the specifics of the relationship.
“Let’s work out all the problems involved in letting the AGI decide what is ethical.”
Okay, but how will you know it’s making the right decision if you do not even know what the right decision is for yourself? If you do not think it is safe to simply give the AGI an algorithm that looks good without testing whether running the algorithm outputs the choices that we want it to make, then how do you test it? How do you even reason about the algorithm? How do you make those beliefs “pay rent”, as the sequence post puts it?
I see now that the statement could be interpreted in one of two ways:
“Let’s work out all the problems involved in letting the AGI define ethics.”
“Let’s work out all the problems involved in letting the AGI make decisions on its own without doing any of the things that are wrong by our definition of what’s ethical.”
Do you not think it better to determine for ourselves whether virtualizing everyone means killing them, and then ensure that the AGI makes the correct decision? Perhaps you approach it this way because you don’t think it’s possible for humans to determine whether virtualizing everyone is ethical?
I do think it is possible, so if you don’t think it is possible, let’s debate that.
Perhaps you approach it this way because you don’t think it’s possible for humans to determine whether virtualizing everyone is ethical?
I think it may not be possible for humans to determine this in the time available before someone builds a UFAI or some other existential risk occurs. Still, I have been trying to determine it, for example, just recently in Beware Selective Nihilism. Did you see that post?
Were you serious about having Eliezer censor my comment? If so, now that you have a better understanding of my ideas and relationship with SIAI, would you perhaps settle for me editing that comment with some additional clarifications?
Sorry for not responding sooner. The tab explosion triggered by the links in your article and related items was pretty big. I was trying to figure out how to deal with the large amount of information that was provided.
If you want to consider my take on it uninformed, fine. I haven’t read all of the relevant information in the tab explosion (this would take a long time). Here is my take on the situation:
If a person is copied, the physical original will not experience what the copy experiences. Therefore, if you remove the physical original, the physical original’s experiences will end. This isn’t perfectly comparable to death, since the person’s experiences, personality, knowledge, and interaction with the world will continue through the copy. However, the physical original’s experiences will end. That, for many, would be an unacceptable result of being virtualized.
I believe in the right to die, so regardless of whether I think being virtualized should be called “death”, I believe that people have the right to choose to do it to themselves. I do not believe that an AGI has the right to make that decision for them. Deciding to end someone else’s experiences without first gaining their consent qualifies as violence to me, and it is alarming to see someone as prominent as you advocating this.
My opinion is that it’s better for PR if you edit your comment. Even if, for some reason, reading the entire tab explosion would somehow reveal to me that yes, the physical original would experience what the copy experiences even after being destroyed, I think it is likely that people who have not read all of that information will interpret it the way that I did and may become alarmed, especially after realizing that it was you who wrote this.
I would be really happy to see you edit your own “virtualize everyone” comments. I do think something needs to be done. My suggestion would be to either:
A. If that is your view, clearly state that you believe the physical original will experience the copy’s experiences even after being removed; or
B. If you agree that the physical original’s experiences would end, refrain from talking about virtualizing everyone without their consent.
I added a disclaimer to my comment. I had to write my own since neither of your suggestions correctly describes my current beliefs. I’ll also try to remember to be a bit more careful about my FAI-related comments in the future, and keep in mind that not all readers will be familiar with my other writings.
I know.
Thanks for listening to me. I feel better about this now.