The halting problem is a worst-case result. Most agents aren’t adversarially constructed so that whether they halt is undecidable. And for those that are, it depends on what the rules are for agents that don’t halt.
There are setups where each agent uses an unphysically large but finite amount of compute. There was a paper I saw a while ago where both agents did a brute-force proof search for the statement “if I cooperate, then they cooperate”, and cooperated if they found a proof.
(I.e. searching all proofs of fewer than 10^100 symbols.)
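A toy illustration of the flavor of this construction (not the paper’s actual method: real proof search is replaced here by fuel-bounded mutual simulation, and all names are made up):

```python
# Toy stand-in for the bounded agents described above: instead of
# searching for a proof of "if I cooperate, they cooperate", each agent
# simulates the other with a finite fuel budget and cooperates only if
# the simulation comes back Cooperate.

def fairbot(opponent, fuel):
    """Cooperate iff a fuel-bounded simulation of the opponent
    (playing against this very agent) returns 'C'."""
    if fuel <= 0:
        return "D"  # budget exhausted: defect by default
    return "C" if opponent(fairbot, fuel - 1) == "C" else "D"

def defectbot(opponent, fuel):
    """Always defects, ignoring the opponent entirely."""
    return "D"

print(fairbot(defectbot, 10))  # "D": the simulation reveals defection
print(fairbot(fairbot, 10))    # "D": the simulation bottoms out at fuel 0
```

Note that two of these simulation-based agents end up defecting, because the nested simulation bottoms out at the default. That is exactly the gap the proof-search construction closes: with proof search, a Löbian argument lets both agents find the proof and cooperate.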
In a situation where you are asking a question about an ideal reasoner, making the agents finite means you are no longer asking about an ideal reasoner. If you put an ideal reasoner in a Newcomb problem, he may very well think “I’ll simulate Omega and act according to what I find” (or, more likely, run some more complicated algorithm that indirectly amounts to that). If the agent can’t do this, he may not be able to solve the problem. Of course, real humans can’t, but this may just mean that real humans, because they are finite, are unable to solve some problems.
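A minimal sketch of why “just simulate Omega” fails for finite agents (illustrative only; the function names are made up): if the agent decides by simulating Omega, and Omega predicts by simulating the agent, the mutual simulation never bottoms out on a finite machine.

```python
# Naive mutual simulation in a Newcomb problem: the agent consults a
# simulation of Omega, and Omega predicts by simulating the agent.

def agent():
    # Decide by simulating Omega's prediction of me.
    return "one-box" if omega() == "one-box" else "two-box"

def omega():
    # Predict by simulating the agent.
    return agent()

try:
    agent()
except RecursionError:
    print("mutual simulation never terminates for a finite agent")
```

An ideal reasoner is stipulated not to run out of resources here; a finite agent has to truncate the regress somewhere, and the truncation is where the argument about ideal reasoners stops applying.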
I get the impression that “has the agent’s source code” is some Yudkowskyism which people use without thinking.
Every time someone says that, I always wonder “are you claiming that the agent that reads the source code is able to solve the Halting Problem?”
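The worry can be made concrete with the standard diagonal construction (a sketch; `predict`, `contrarian`, and the source string are all invented for illustration): any fixed “read their source code and predict” routine is defeated by an agent written to consult that routine on its own source and do the opposite.

```python
# Diagonalization sketch: a source-reading predictor vs. an agent built
# to invert that predictor's verdict about itself.

def predict(source: str) -> str:
    """Toy analyzer: predicts 'C' iff the source text mentions 'cooperate'."""
    return "C" if "cooperate" in source else "D"

# The contrarian's source, kept as a string so it can be handed to the
# predictor (a stand-in for genuine source-code access).
CONTRARIAN_SOURCE = """
def contrarian():
    # Ask the predictor about my own source, then cooperate exactly
    # when it predicts I won't.
    return "D" if predict(CONTRARIAN_SOURCE) == "C" else "C"
"""

def contrarian():
    # Ask the predictor about my own source, then cooperate exactly
    # when it predicts I won't.
    return "D" if predict(CONTRARIAN_SOURCE) == "C" else "C"

print(predict(CONTRARIAN_SOURCE), contrarian())  # prints: C D
```

Whatever the predictor outputs, the contrarian does the opposite, so no single predictor is right about every agent. This is the same diagonal argument that underlies the halting problem, and it is why “has the agent’s source code” only buys you predictions about agents that aren’t adversarially constructed.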