If intelligence turns out to be adaptive (as believed by me and many others), then a “friendly AI” will be mainly the result of proper education, not proper design. There will be no way to design a “safe AI”, just as there is no way to require parents to only give birth to a “safe baby” who will never become a criminal.
I don’t think that follows. What consumer robot makers will want is the equivalent of a “safe baby”—one who will practically never become a criminal. That will require a tamper-proof brain, and many other safety features. Robot builders won’t want to see their robots implicated in crimes. There’s no law that says this is impossible, and that’s because it is possible.
Machines don’t really distinguish between the results of education and a priori knowledge. That’s because you can clone adult minds—which effectively blurs the distinction.
Clone? Maybe. Hopefully. Create from scratch? Not so sure.
I meant that you can clone adult machine minds there.