I agree with you, which is why I’m interested in generalizing to axias and making room in the theory for internal inconsistency. In my view, current decision-theoretic approaches ignore this; while that’s fine for now, it will eventually become a problem that will need to be addressed, either by humans or by AI itself.