I think a problem with my solution is this: how can the AI “understand” the behaviors and thought processes of a “more powerful agent”? If you know what someone smarter than you would think, then you are simply that smart. And if we abstract away the specific more-powerful-agent’s thoughts, we are left with Kantian ethics, and we are back where we started, trying to put ethics/morals into the AI.
It’s a bit rude to call my idea so stupid that I must not have thought about it for more than five minutes, but thanks for your advice anyway. It is good advice.
I didn’t intend this.