Well yes. That’s sort of the problem with building one. Utility functions are certainly useful for specifying where logical uncertainty should be reduced.
Well, I don’t know about the precise construction that would be used. Certainly I could see a human being deliberately focusing the system on some things rather than others.
Well, ok, but if you agree with this then I don’t see how you can claim that such a system would be particularly useful for solving FAI problems.