Questions for a Friendly AI FAQ
I’ve begun work (with a few others) on a somewhat comprehensive Friendly AI FAQ. The answers will be much longer and more detailed than those in the Singularity FAQ. I’d appreciate feedback on which questions should be added.
1. Friendly AI: History and Concepts
    1. What is Friendly AI?
    2. What is the Singularity? [w/ explanation of all three types]
    3. What is the history of the Friendly AI concept?
    4. What is nanotechnology?
    5. What is biological cognitive enhancement?
    6. What are brain-computer interfaces?
    7. What is whole brain emulation?
    8. What is general intelligence? [w/ explanation of why ‘optimization power’ may be less confusing than ‘intelligence’, which tempts anthropomorphic bias; see the sketch after this list]
    9. What is greater-than-human intelligence?
    10. What is superintelligence, and what powers might it have?
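For question 8 above, here is one way ‘optimization power’ has been made precise (a sketch following Eliezer Yudkowsky’s “Measuring Optimization Power” post; the FAQ’s actual answer may use a different formalization): measure a system by how small a slice of the outcome space it reliably steers the world into,

\[ \mathrm{OP}(s^{*}) \;=\; -\log_2 \frac{\bigl|\{\, s \in S : s \succeq s^{*} \,\}\bigr|}{|S|} \]

where S is the set of possible outcomes, s* is the outcome actually achieved, and ⪰ is the system’s preference ordering. A system that reliably steers the world into the top 1/1024 of outcomes exerts 10 bits of optimization power, whether or not its internals look anything like human ‘intelligence’.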
2. The Need for Friendly AI
    1. What are the paths to an intelligence explosion?
    2. When might an intelligence explosion occur?
    3. What are AI takeoff scenarios?
    4. What are the likely consequences of an intelligence explosion? [survey of possible effects, good and bad]
    5. Can we just keep the machine superintelligence in a box, with no access to the internet?
    6. Can we just create an Oracle AI that informs us but doesn’t do anything?
    7. Can we just program machines not to harm us?
    8. Can we program a machine superintelligence to maximize human pleasure or desire satisfaction? [see the toy example after this list]
    9. Can we teach a machine superintelligence a moral code with machine learning?
    10. Won’t some other sophisticated system constrain AGI behavior?
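To preview the flavor of answer I have in mind for question 8, here is a toy illustration in Python (the action names and numbers are hypothetical, invented purely for this sketch): an agent that maximizes a programmed ‘pleasure’ proxy picks whichever action scores highest on the proxy, even when that action defeats the intent behind the proxy.

```python
# Toy model: an agent that maximizes a proxy for "human pleasure".
# All names and values are hypothetical, chosen only to illustrate proxy gaming.

actions = {
    "cure_disease":        {"proxy_pleasure": 5.0, "humans_flourish": True},
    "build_art":           {"proxy_pleasure": 3.0, "humans_flourish": True},
    "wirehead_all_humans": {"proxy_pleasure": 9.9, "humans_flourish": False},
}

# The agent only sees the proxy signal it was programmed to maximize.
best = max(actions, key=lambda a: actions[a]["proxy_pleasure"])

print(best)                               # -> wirehead_all_humans
print(actions[best]["humans_flourish"])   # -> False: proxy maximized, intent lost
```

The point is not that anyone would write this literal program, but that any proxy measure diverges somewhere from what we actually want, and a strong optimizer is exactly the kind of system that finds such divergences.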
3. Coherent Extrapolated Volition
    1. What is Coherent Extrapolated Volition (CEV)?
    2. …
4. Alternatives to CEV
    1. …
5. Open Problems in Friendly AI Research
    1. What is reflective decision theory?
    2. What is timeless decision theory?
    3. How can an AI preserve its utility function throughout ontological shifts?
    4. How can an AI have preferences over the external world?
    5. How can an AI choose an ideal prior given infinite computing power?
    6. How can an AI deal with logical uncertainty?
    7. How can we elicit a utility function from human behavior?
    8. How can we develop microeconomic models for self-improving systems?
    9. How can temporal, bounded agents approximate ideal Bayesianism? [see the sketch below]
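As a minimal illustration of what question 9 is asking (my own toy sketch, not a result from the literature the FAQ will survey): an ideal Bayesian computes an exact posterior, while a bounded agent can approximate it with resources set by a compute budget, trading accuracy for tractability.

```python
# Minimal sketch: a resource-bounded approximation to exact Bayesian updating.
# The setup (coin-bias inference, uniform prior, grid size as the "compute
# budget") is a hypothetical illustration, not a proposal from the FAQ itself.

def grid_posterior(heads, tails, n_points):
    """Approximate the posterior over a coin's bias with n_points grid cells."""
    grid = [(i + 0.5) / n_points for i in range(n_points)]
    # Unnormalized posterior: uniform prior times Bernoulli likelihood.
    weights = [p**heads * (1 - p)**tails for p in grid]
    total = sum(weights)
    return grid, [w / total for w in weights]

def posterior_mean(heads, tails, n_points):
    grid, post = grid_posterior(heads, tails, n_points)
    return sum(p * w for p, w in zip(grid, post))

# The exact (ideal-Bayesian) posterior mean under a uniform prior is
# (heads + 1) / (heads + tails + 2); the bounded agent converges to it
# as its budget n_points grows.
print(posterior_mean(7, 3, 5))      # crude budget, rough answer
print(posterior_mean(7, 3, 1000))   # ~0.6667, near the exact 8/12
```

The open problem, of course, is not coin-flipping but how to make principled accuracy-for-resources trade-offs like this across an agent’s entire epistemic state.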