Thank you for treating it as a “today’s lucky 10,000” event. I am aware of quines (though not much more than just ‘aware’), and what I am worried about is whether the people who created FairBot were careful enough.
I haven’t spotted any inherent contradictions in the FairBot variants thus far.
That being said, now that I look at it, there’s no guarantee that said FairBot variants will ever halt… in which case they aren’t proper agents, in much the same way that `while (true) {}` isn’t a proper agent.
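To make the worry concrete, here’s a minimal sketch of the unbounded version (the names and the proof-search stub are my own inventions, not from any FairBot paper): the agent searches for a proof that its opponent cooperates with it, and if no such proof exists, it simply never returns.

```python
# Minimal sketch of an *unbounded* FairBot. Everything here is
# illustrative: `proves` is a stub standing in for a real proof search
# over some formal system (e.g. Peano Arithmetic).

def proves(statement: str, proof_length: int) -> bool:
    """Stub: would check whether `statement` has a proof of exactly
    `proof_length` steps in the chosen proof system."""
    raise NotImplementedError  # placeholder, not a real proof checker

def unbounded_fairbot(opponent_source: str) -> str:
    n = 1
    while True:  # no bound and no fallback: if no proof exists,
                 # this loops forever, just like `while (true) {}`
        if proves(f"agent {opponent_source} cooperates with unbounded_fairbot", n):
            return "Cooperate"
        n += 1
```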
I’ve seen ‘they run for some large amount of time/possible proofs, then they switch to their codepath for no proof found’. Not in code, but in a paper that proved or tried to prove something like this:
> I assert that this agent will Cooperate with itself. When you ask me why, I mutter something about the ‘Assumable Provability Theorem’, and declare that this means ‘if something is provable within a proofsystem, starting from the premise that the quoted statement is provable inside the quoted proofsystem, then it’s thereby provable within the system’, and therefore that Suspicious-FairBot will Cooperate with itself.
except with the ‘searching for a long time/lots of proofs’ condition, rather than searching forever.

It might have been this one:

http://intelligence.org/files/ParametricBoundedLobsTheorem.pdf
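For comparison, here is roughly the halting variant as I understand it (again a sketch with invented names, and the budget is an arbitrary number I picked): the agent gives up after a fixed number of candidate proofs and takes an explicit ‘no proof found’ codepath.

```python
# Sketch of the *bounded* variant: same proof-search stub as before,
# but with a hard budget and an explicit fallback, so it always halts.

def proves(statement: str, proof_length: int) -> bool:
    """Stub: would check for a proof of the given length."""
    raise NotImplementedError  # placeholder, not a real proof checker

PROOF_BUDGET = 10_000  # arbitrary illustrative bound on the search

def bounded_fairbot(opponent_source: str) -> str:
    for n in range(1, PROOF_BUDGET + 1):
        if proves(f"agent {opponent_source} cooperates with bounded_fairbot", n):
            return "Cooperate"
    return "Defect"  # the ‘no proof found’ codepath: guarantees halting
```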
> I’ve seen ‘they run for some large amount of time/possible proofs, then they switch to their codepath for no proof found’
Sure… in which case there’s a third option, where there is no proof that they cooperate/don’t cooperate with themselves, precisely because they have limited runtime and so stop looking at proofs ‘prematurely’.
(So, for instance, there might be a proof that an agent that searched for 2^2000 clock cycles would cooperate with itself, and so it satisfies Löb’s Theorem… but no easier-to-find proof.)
Your quoted bit about Löb’s Theorem says nothing about how complex said proof will be.
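To spell out how that third option bites (the numbers here are purely illustrative, mine rather than the paper’s): if the shortest proof of self-cooperation is longer than the agent’s own search budget, the bounded agent exhausts its budget and takes the fallback path, even though the proof exists.

```python
# Illustrative only: the shortest existing proof vs. the agent's budget.
# If the proof is out of reach, the fallback fires and the agent defects
# against itself despite the proof's existence.

SHORTEST_PROOF_LENGTH = 2 ** 2000  # hypothetical: a proof this long exists
PROOF_BUDGET = 10_000              # the agent stops searching long before

outcome = "Cooperate" if SHORTEST_PROOF_LENGTH <= PROOF_BUDGET else "Defect"
print(outcome)  # -> Defect: the proof exists, but not within the budget
```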