You’ve talked about similar things yourself in the context of game semantics / abstract interpretation / time-symmetric perceptions/actions. I’d be interested in Skype convo-ing with you now that I have an iPhone and thus a microphone. I’m very interested in what you’re working on, especially given recent events. Your emphasis on semantics has always struck me as well-founded. I have done a fair amount of speculation about how an AI (a Goedel machine, say) crossing the ‘self-understanding’/‘self-improving’/Turing-universal/general-intelligence/semantic boundary would transition from syntactic symbol manipulator to semantic goal optimizer, and what that would imply about how it would interpret the ‘actual’ semantics of the Lisp tokens that the humans would identify as its ‘utility function’. If you don’t think about that much, then I’d like to convince you that you should, considering that it is on the verge of technicality and also potentially very important for Shulman-esque singularity game theory.
The idea is that running exactly the same algorithm as other agents, or a similar one, is enormously good, due to a proliferation of true PDs, and that therefore even the non-game-theoretic parts of an agent’s algorithm should be designed, whenever possible, to mimic other agents.
However, applying this argument to utility functions seems a bit over-the-top. Since whether or not something is a PD in the first place depends on your utility function, altering the utility function in order to win at PDs should be counter-productive. If that reasoning holds, we need better decision theories.
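The “same algorithm ⇒ mutual cooperation” idea can be sketched concretely. Below is a minimal, hypothetical illustration (the function names are mine, not from this thread): a one-shot Prisoner’s Dilemma in which each agent can inspect its opponent’s code, and cooperates exactly when that code matches its own. Two copies of the algorithm, even under different names, recognize each other and cooperate, while defecting against anything else.

```python
# Sketch of source-code-based cooperation in a one-shot PD.
# "C" = cooperate, "D" = defect. An agent cooperates iff the
# opponent's compiled bytecode is identical to its own.

def agent_a(opponent):
    # Cooperate iff the opponent runs my exact decision rule.
    return "C" if opponent.__code__.co_code == agent_a.__code__.co_code else "D"

def agent_b(opponent):
    # An exact copy of agent_a's decision rule, under a different name.
    return "C" if opponent.__code__.co_code == agent_b.__code__.co_code else "D"

def defect_bot(opponent):
    # Always defects, regardless of the opponent.
    return "D"

print(agent_a(agent_b))     # the two copies cooperate with each other
print(agent_a(defect_bot))  # and defect against a different algorithm
```

Note the fragility this sketch makes visible: cooperation here depends on exact syntactic identity, which is part of why the comment above argues for mimicking other agents’ algorithms wherever possible.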
Ah, I see.
(I, on the other hand, don’t.)