I think this article might benefit from some definitional clarity. I’m going to throw out some potential definitions I have in mind (although there may be others):
Basic Robotic Ethics: (Tactical Problem)
There exist difficult decisions of a defined scope that humans have trouble making right now (given this intelligence and these images, is this person a valid military target?).
If technology continues to advance, in the future more of these decisions will be made by robots.
We need to decide how to program the robots making these decisions in a way that we can accept (see the sketch after these definitions).
Basic Artificial Intelligence Ethics: (Strategic Problem)
There exist non-human entities that are beyond our ability to easily make friendly by rewriting their code right now (examples: corporations, governments).
If technology continues to advance, in the future there will exist non-human entities, run by artificial intelligences, that are even more powerful and larger in scope.
Barring advances in friendliness research, these will likely be even more difficult to recode.
Advanced Artificial Intelligence Ethics: (Generational Problem)
There exists difficulty right now in passing our ethics down to the next generation and in improving upon them.
If technology continues to advance, at some point, the Artificial Intelligence described in Basic Artificial Intelligence Ethics might be programming the robots described in Basic Robotic Ethics and improving on its own programming.
We need to ensure that this occurs in a manner that is friendly.
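To make the scope distinction concrete, here is a minimal sketch, in Python, of what the tactical problem looks like as a programming task. Everything in it (the TargetAssessment fields, the toy rules) is my own hypothetical illustration rather than anything from the article; the only point is that the tactical problem has a fixed set of inputs and a fixed output type, which the strategic and generational problems do not.

```python
# Hypothetical sketch of the "Basic Robotic Ethics" tactical problem:
# a decision of defined scope, with fixed inputs and a yes/no output.
# The fields and rules below are illustrative only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TargetAssessment:
    intelligence_report: str                          # "this intelligence"
    images: List[str] = field(default_factory=list)   # "these images"
    is_combatant: bool = False                        # facts the system is given or infers
    near_civilians: bool = True

def is_valid_military_target(assessment: TargetAssessment) -> bool:
    """Bounded decision: given these inputs, return yes or no.
    The hard ethical work is deciding what belongs in this function,
    but its scope is fixed in advance."""
    if not assessment.is_combatant:
        return False
    if assessment.near_civilians:
        return False
    return True

# The strategic and generational problems have no analogue of this
# function signature: there is no fixed input set or output type for
# "make a corporation-scale agent friendly" or "supervise an agent
# that rewrites its own programming."
```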
Now that I’ve defined what I’m thinking of, I can say that the article’s writer appears to be discussing all of these different problems as if they were practically the same thing, even though several of the very quotes he uses seem to politely imply that he’s making scope errors. I’m guessing some of the people didn’t want to talk to him because they got the impression he didn’t understand enough of the basics to follow the distinctions.