Human beings aren’t friendly, in the Friendly-AI sense. If a random human acquired immense power, it would probably result in an existential catastrophe. Humans do have a better sense of human values than, say, a can-opener does; but they also have more power and autonomy than a can-opener, so they need fuller access to human values to reach similar safety levels. A superintelligent AI would require even fuller access to human values to reach comparable safety levels.
If you grafted absolute power onto a human with average ethical insight, you might get absolute corruption. But what is that analogous to in AI terms? Why assume asymmetric development by default?
If you assume a top-down singleton AI with a walled-off ethics module, things look difficult. If you reverse these assumptions, FAI is already happening.
There is more than one sense of “friendly AI.”