Thanks for the nice interview, Alexander. I’m Eray Ozkural, by the way; if you have any further questions, I would love to answer them.
I actually think that SIAI serves a useful purpose in highlighting the importance of ethics in AI research and in computer science in general.
Most engineers (whether because we are slightly autistic, I do not know) have little or no interest in the ethical consequences of their work. I have met many good engineers who work for military firms (firms that thrive on easy money from the military), and not once have they raised a point about ethics. Nor do data mining or information retrieval researchers seem to have such qualms (except when they pretend to at academic conferences). At companies like Facebook, people think they have a right to exploit the data they have collected and use it for all sorts of commercial and police-state purposes. Likewise, in AI and robotics, I see people cheering whenever military drones or robots are mentioned, as if the automation of warfare were somehow more civil, or better, because it involves higher technology.
I think that AGI researchers, at least, must understand that they should have no dealings with the military or the government, because such dealings may put themselves and all of us at risk. Maybe fear tactics will work; I don’t know.
On the other hand, I don’t think that “friendly” AI is such a big concern; for reasons I mention above, artificial persons simply aren’t needed. I have heard the argument that “someone will build it sooner or later”, but there is no reason that person is going to listen to you. The way I see it, it’s better to focus on the technology right now, so that we get a better sense of the applications first.

People seem to think that we should equip robots with fully autonomous AGI. Why is that? People have mentioned to me robotic bartenders, robotic geishas, fire rescuers, and cleaners. Well, is that serious? Do you really want a bartender that can solve general relativity problems while cleaning glasses? It’s just nonsense. Does a fire rescuer really need to think about whether it wants to go on to exterminate the human race after extinguishing the fire? The simple answer is that the people who give those examples are not focusing on the engineering requirements of the applications they have in mind.

Another example: military robots. People say that military robots must have a sense of morality. I ask you: why is it important to have moral individuals in an enterprise that is fundamentally immoral? All war is murder, and I suggest you stay away from the professional murder business. That is, if you have any true sense of morality.
Instead of “friendliness”, a sense of “benevolence” might be considered instead, and that might make sense from the viewpoint of ethical theory. It is possible to formalize some theories of ethics and implement them on an autonomous AI; however, for all the capabilities that autonomous trans-sapient AIs may possess, I think it is not a good idea to let such machines develop into distinctive personalities of their own, or to meddle in human affairs. I think there are already too many people on earth; I don’t think we need artificial persons. We might need robots, and we might need AIs, but not artificial persons, or AIs that will decide instead of us. I prefer that as humans we remain at the helm. I say that with respect to some totalitarian-sounding proposals like CEV.

In general, I do not think that we need to replace critical decision making with AIs. Give AIs to us scientists and engineers, and that shall be enough. For the rest, such as replacing corrupt and ineffective politicians, a broken economic system, and social injustice, we need human solutions, because ultimately we must replace some sub-standard human models, with harmful motivations like greed and superstitious ideas, with better human models that have the intellectual capacity to understand the human condition, science, philosophy, and so on, regardless of any progress in AI. :) In the end, there is a pandemic of stupidity and ignorance that we must cure to solve those social problems, and I doubt we can cure it with an AI vaccine.
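As a purely illustrative aside on the claim that some theories of ethics can be formalized and implemented: below is a minimal sketch of one such formalization, a consequentialist action-selection rule constrained by deontological side conditions. All names, values, and the scoring scheme are hypothetical assumptions for illustration, not anything proposed in this discussion.

```python
# Minimal sketch of one way to formalize an ethical theory as a decision rule:
# maximize expected benefit (consequentialist term) subject to hard constraints
# (deontological term). All names and values are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Action:
    name: str
    expected_benefit: float     # estimated welfare gain of taking this action
    violates_constraint: bool   # e.g. the action would deliberately harm a person


def choose_action(actions: List[Action]) -> Optional[Action]:
    """Pick the highest-benefit action among those violating no constraint."""
    permitted = [a for a in actions if not a.violates_constraint]
    if not permitted:
        return None  # refuse to act rather than break a constraint
    return max(permitted, key=lambda a: a.expected_benefit)


if __name__ == "__main__":
    options = [
        Action("extinguish fire", expected_benefit=10.0, violates_constraint=False),
        Action("do nothing", expected_benefit=0.0, violates_constraint=False),
        Action("endanger bystander", expected_benefit=12.0, violates_constraint=True),
    ]
    print(choose_action(options))  # -> the "extinguish fire" action
```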
People seem to think that we should equip robots with fully autonomous AGI. Why is that? People have mentioned to me robotic bartenders, robotic geishas, fire rescuers, and cleaners. Well, is that serious? Do you really want a bartender that can solve general relativity problems while cleaning glasses? It’s just nonsense.
You can’t have a human in the loop all the time—it’s too slow. So: many machines in the future will be at least semi-autonomous—as many of them are today.
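To make “semi-autonomous” concrete, here is a small hypothetical sketch: the machine handles routine, low-risk decisions on its own and escalates only rare, high-stakes ones to a human operator. The threshold, risk estimates, and function names are assumptions for illustration only.

```python
# Hypothetical sketch of semi-autonomous operation: act without waiting for a
# human on low-risk decisions, and put a human in the loop only above a risk
# threshold. The threshold and names are illustrative assumptions.
from typing import Callable

RISK_THRESHOLD = 0.7  # above this, a human must approve the action


def decide(action: str, estimated_risk: float,
           ask_human: Callable[[str], bool]) -> bool:
    """Return True if the action should be carried out."""
    if estimated_risk < RISK_THRESHOLD:
        return True               # routine case: act autonomously, no delay
    return ask_human(action)      # high-stakes case: human in the loop


if __name__ == "__main__":
    approve = lambda a: input(f"Approve '{a}'? [y/N] ").strip().lower() == "y"
    print(decide("adjust room temperature", 0.05, approve))          # acts on its own
    print(decide("apply emergency brake on highway", 0.90, approve)) # asks a human first
```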
Probably a somewhat more interesting question is whether machines will be given rights as “people”. It’s a complex political question, but I expect that eventually they will. Thus things like The Campaign for Robot Rights. The era of machine slavery will someday be looked back on as a kind of moral dark ages—much as the era of human slavery is looked back on today.
I prefer that as humans we remain at the helm.
Right—but what is a human? No doubt the first mammals also wished to “remain at the helm”. In a sense they did—though many of their modern descendants don’t look much like the mouse-like creatures they all descended from. It seems likely to be much the same with us.
On the other hand, I don’t think that “friendly” AI is such a big concern; for reasons I mention above, artificial persons simply aren’t needed.
That isn’t the SIAI proposal, FWIW. See: http://lesswrong.com/lw/x5/nonsentient_optimizers/