You may also wish to consider including some of the questions from the Singularity FAQ that Tom McCabe and I wrote. I was never very happy with it and don’t generally endorse it in its current state. We managed to gather a large variety of AI/FAI/Singularity-related questions from various people, but never really managed to write very good answers. Still, you might find some of the questions useful.
Below are the questions from some of the FAI-relevant sections. IIRC, these are all questions that somebody has actually asked at one point or another.
Alternatives to Friendly AI
Q1). Couldn’t AIs be built as pure advisors, so they wouldn’t do anything themselves?
Q2). Wouldn’t a human upload naturally be more Friendly than any AI?
Q3). Trying to create a theory which absolutely guarantees Friendly AI is an unrealistic, extremely difficult goal, so isn’t it a better idea to attempt to create a theory of “probably Friendly AI”?
Q4). Shouldn’t we work on building a transparent society, where no illicit AI development can be carried out?
Implementation of Friendly AI
Q1). Wouldn’t an AI that’s forced to be Friendly be prevented from evolving and growing?
Q2). Didn’t Shane Legg prove that we can’t predict the behavior of intelligences smarter than us?
Q3). Since a superintelligence could rewrite itself to remove human tampering, isn’t Friendly AI impossible?
Q4). Why would a superintelligent AI have any reason to care about humans, who would be stupid by comparison?
Q5). What if the AI misinterprets its goals?
Q6). Isn’t it impossible to simulate a person’s development without creating, essentially, a copy of that person?
Q7). Isn’t it impossible to know a person’s subjective desires and feelings from outside?
Q8). Isn’t it possible that a machine could never understand human morality, or human emotions?
Q9). What if AIs take advantage of their power, and create a dictatorship of the machines?
Q10). If we don’t build a self-preservation instinct into the AI, wouldn’t it just find no reason to continue existing, and commit suicide?
Q11). What if superintelligent AIs reason that it’s best for humanity to destroy itself?
Q12). The main defining characteristic of complex systems, such as minds, is that no mathematical verification of properties such as “Friendliness” is possible; hence, even if Friendliness is possible in theory, isn’t it impossible to implement?
Q13). Any future AI would undergo natural selection, so wouldn’t it eventually become hostile to humanity to better pursue reproductive fitness?
Q14). Shouldn’t FAI be done as an open-source effort, so other people can see that the project isn’t being hijacked to make some guy Supreme Emperor of the Universe?
Q15). If an FAI does what we would want if we were less selfish, won’t it kill us all in the process of extracting resources to colonize space as quickly as possible to prevent astronomical waste?
Q16). What if ethics are subjective, not objective? Wouldn’t that mean that no truly Friendly AI could be built?
Q17). Isn’t the idea of a hostile AI anthropomorphic?
Q18). Isn’t the idea of “Friendliness”, as we understand it now, too vaguely defined?
Q19). Why don’t mainstream researchers consider Friendliness an issue?
Q20). How could an AI build a computer model of human morality, when human morals contradict each other, even within individuals?
Q21). Aren’t most humans rotten bastards? Isn’t basing an FAI morality off of human morality a bad idea anyway?
Q22). If an AI is programmed to make us happy, the best way to make us happy would be to constantly stimulate our pleasure centers, so wouldn’t it turn us into nothing but experiencers of constant orgasms?
Q23). What if an AI decides to force us to do what it thinks is best for us, or what will make us the happiest, even if we don’t like it?
General Questions
Q1). If AI is a serious threat, then wouldn’t the American government or some other official agency step in and take action, for fear of endangering national security?
Q2). Won’t the US government, Google, or some other large organization with billions of dollars and thousands of employees be the first to develop strong AI?
Q3). What if the Singularity Institute and its supporters are just another “doomsday cult”, like the religious extremists who talk about how “the end is nigh”?
Q4). Shouldn’t all of humanity have input on how a Friendly AI should be designed, instead of just a few programmers or scientists?
Q5). Has the Singularity Institute done research and published papers, like other research groups and academic institutions?
Societal Issues
Q1). What if humans don’t accept being ruled by machines?
Q2). How do we make sure that an AI doesn’t just end up being a tool of whichever group built it, or controls it?
Q3). Aren’t power-hungry organizations going to race to develop AI technology and use it to dominate the world, before there’s time to create truly Friendly AI?
Q4). What if an FAI only helps the rich, the First World, uploaded humans, or some other privileged class of elites?
Q5). Since hundreds of thousands of people are dying every day, don’t we need AI too urgently to let our research efforts be delayed by having to guarantee Friendliness?