I’m very uneasy about how to properly discuss AI research:
One can’t warn of the dangers of AI without bragging about its power. Will the warning increase or decrease the probability of UAI?
One can’t advise responsible people not to attempt to make an AI without increasing the risk that the first AI will be made by someone irresponsible. But what are the chances that an AI made with good intentions destroys humanity anyway?
AI research seems to correspond to a prisoner’s dilemma, so I wouldn’t expect cooperation.
I don’t know whether it is a better idea to oppose AI research or support it.
I think I can safely conclude:
Anyone who already shows an interest in AI should be warned of the dangers.
Advise using encryption at all levels when doing AI research, on the assumption that anyone who would steal AI research is more likely to be dangerous (more likely to make a mistake, or to be evil).
Support research into “what is good” in terms that might help with programming it.
Support research into the obedience aspect of AI (obedience either to the author directly, or to the author’s intended programming).
As for the involvement of government, I’m really nervous about that, whether it is an individual nation or a supposedly cooperative international effort.
These are tricky issues. :)
Fortunately, many real-world scenarios are iterated prisoner’s dilemmas, in which defection (e.g., moving ahead with your country’s AI research faster than what was agreed upon) can be punished in later rounds, which makes cooperation easier to sustain. We can also set up side payments against defection, for example through an international governing body. And changing people’s views about the payoffs (such as by encouraging an internationalist outlook) could make the game no longer a prisoner’s dilemma.
In general, this highlights the importance of improving theory of, institutions for, and inclinations toward compromise.
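To make the payoff logic concrete, here is a minimal Python sketch of the point about side payments. The payoff numbers and the helper functions (best_response, with_fine) are made up for illustration, not taken from any particular model: in a one-shot prisoner’s dilemma defection dominates, but a large enough penalty on defectors changes the game so that cooperation is each side’s best response.

```python
# Toy prisoner's dilemma with a "side payment" (a fine on defection).
# All payoff numbers below are illustrative assumptions.

# payoffs[(my_move, their_move)] = my payoff; "C" = cooperate, "D" = defect
PD_PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(payoffs, their_move):
    """Return my payoff-maximizing move, given the other player's move."""
    return max("CD", key=lambda my_move: payoffs[(my_move, their_move)])

def with_fine(payoffs, fine):
    """Subtract `fine` from every outcome in which I defect (the side payment)."""
    return {(me, them): p - (fine if me == "D" else 0)
            for (me, them), p in payoffs.items()}

for fine in (0, 3):
    game = with_fine(PD_PAYOFFS, fine)
    print(f"fine={fine}: best response to C is {best_response(game, 'C')}, "
          f"to D is {best_response(game, 'D')}")

# fine=0: best response to C is D, to D is D  (classic dilemma: always defect)
# fine=3: best response to C is C, to D is C  (no longer a prisoner's dilemma)
```

Repeated play and internationalist attitudes reshape the payoffs in the same way, just less formally: once defection stops being the dominant strategy, agreements to coordinate research no longer require anyone to act against their own incentives.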