There is more than one sense of friendly AI.
If you grafted absolute power onto a human with average ethical insight, you might get absolute corruption. But what is that analogous to in AI terms? Why assume asymmetric development by default?
If you assume a top-down singleton AI with a walled-off ethics module, things look difficult. If you reverse those assumptions, FAI is already happening.