...and then explain exactly why it is that an AI cannot have a fixed goal architecture.
He answered that in the interview.
Yes, but the answer was:
If intelligence turns out to be adaptive (as believed by me and many others), then a “friendly AI” will be mainly the result of proper education, not proper design. There will be no way to design a “safe AI”, just like there is no way to require parents to only give birth to a “safe baby” who will never become a criminal.
...which is pretty incoherent. His reference for this appears to be himself here and here. This material is also not very convincing. No doubt critics will find the section on “AI Ethics” in the second link revealing.