Two agents in a PD can find a reason to cooperate by proving (deciding) that their decision algorithms are both equivalent to some third algorithm, the same one for both agents; in that case each can see that their decisions coincide, and so (C,C) is better than (D,D). This common algorithm can be seen as a kind of focal point that both agents want to arrive at.
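For concreteness, here is a minimal Python sketch of that "common third algorithm" move, assuming the agents simply exchange source text and letting syntactic normalization stand in for a genuine proof of equivalence (which is the hard part); FOCAL_SOURCE, normalize, and play are hypothetical names for illustration, not anyone's actual decision theory:

```python
# Sketch: cooperate iff both decision procedures reduce to the same
# focal algorithm. Syntactic identity after parsing is a stand-in for
# the (much harder) proof of semantic equivalence.

import ast

FOCAL_SOURCE = """
def decide(opponent_equivalent):
    # The shared focal algorithm: cooperate exactly when the opponent's
    # decision procedure has been shown equivalent to this one.
    return "C" if opponent_equivalent else "D"
"""

def normalize(source: str) -> str:
    """Reduce source text to a canonical form (here: a dump of its AST)."""
    return ast.dump(ast.parse(source))

def play(my_source: str, their_source: str) -> str:
    """Check that both programs reduce to the focal algorithm; if the
    check succeeds, the two decisions are the same, so (C,C) beats (D,D)
    and cooperation follows."""
    focal = normalize(FOCAL_SOURCE)
    equivalent = normalize(my_source) == normalize(their_source) == focal
    namespace = {}
    exec(my_source, namespace)          # instantiate my decision procedure
    return namespace["decide"](equivalent)

# Two agents running the same algorithm cooperate; against a defect-bot
# the equivalence check fails and the agent defects.
print(play(FOCAL_SOURCE, FOCAL_SOURCE))                      # -> "C"
print(play(FOCAL_SOURCE, "def decide(_):\n    return 'D'"))  # -> "D"
```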
I don’t think it matters much, but the specific agents I had in mind were two subagents/subalgorithms (contingent, non-Platonic instantiations), both “derived” (logically/acausally) from some class of creator agents/algorithms that are unknown to them, of varying probability, and less contingent than they are. The subagents’ decision theory ‘cares’ about creator/creation symmetry or something like it, e.g., causally speaking, there should be no arbitrary discontinuity in decision policy across time. There may be multiple possible focal points, and correctly handling the logical uncertainty may be tricky.
All of that is to suggest that the focus shouldn’t be on determining some focal point for the universe, if that even means anything, but on focal points in algorithm-space, which is probably far more important.
Ah, I see.
(I, on the other hand, don’t.)
You’ve talked about similar things yourself in the context of game semantics / abstract interpretation / time-symmetric perceptions/actions. I’d be interested in a Skype conversation with you, now that I have an iPhone and thus a microphone. I’m very interested in what you’re working on, especially given recent events; your emphasis on semantics has always struck me as well-founded. I’ve done a fair amount of speculation about how an AI (a Gödel machine, say) crossing the ‘self-understanding’/‘self-improving’/Turing-universal/general-intelligence/semantic boundary would transition from syntactic symbol manipulator to semantic goal optimizer, and what that would imply about how it would interpret the ‘actual’ semantics of the Lisp tokens that the humans would identify as its ‘utility function’. If you don’t think about that much, I’d like to convince you that you should, since it is on the verge of being a technical question and also potentially very important for Shulman-esque singularity game theory.
The idea is that having algorithms exactly the same as, or similar to, other agents’ is enormously valuable, because true PDs are so common, and that therefore even the non-game-theoretic parts of an agent’s algorithms should be designed, whenever possible, to mimic other agents.
However, applying this argument to utility functions seems a bit over the top. Since whether something is a PD at all depends on your utility function, altering the utility function in order to win at PDs should be counterproductive. If that argument holds, we need better decision theories.
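To make the “whether it’s a PD depends on your utility function” point concrete, here is a toy Python check; the payoff numbers, is_pd, and the altruistic transform are illustrative assumptions, not anything from the discussion above:

```python
# The same material payoffs can stop being a PD once utilities are
# transformed, which is why editing utilities to "win at PDs" is
# self-undermining: it dissolves the game it was meant to win.

def is_pd(R, S, T, P):
    """Standard PD ordering on one player's utilities: temptation >
    reward > punishment > sucker, with mutual cooperation beating
    alternating exploitation."""
    return T > R > P > S and 2 * R > T + S

# Material payoffs as (my outcome, their outcome) in a classic PD.
payoffs = {"CC": (3, 3), "CD": (0, 5), "DC": (5, 0), "DD": (1, 1)}

def selfish(mine, theirs):
    return mine

def altruistic(mine, theirs):
    # Weighting the other player's payoff equally changes the game.
    return mine + theirs

for name, u in [("selfish", selfish), ("altruistic", altruistic)]:
    R = u(*payoffs["CC"])
    S = u(*payoffs["CD"])
    T = u(*payoffs["DC"])
    P = u(*payoffs["DD"])
    print(name, "is a PD:", is_pd(R, S, T, P))

# selfish is a PD: True
# altruistic is a PD: False  (cooperation now dominates outright)
```

Under the transformed utilities there is no dilemma left to win, which is the sense in which rewriting the utility function to do well at PDs is counterproductive rather than clever.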