It’s been ages since I studied provability logic, but those bots look suspicious to me. Has anybody actually formalized them? Like the definition of FairBot involves itself, so it is not actually a definition. Is it a statement that we consider true? Is it just a statement that we consider provable? Why wouldn’t adding something like this to GL result in a contradiction?
Like the definition of FairBot involves itself, so it is not actually a definition.
Ah, you are part of today’s Lucky 10,000. Quines are a thing.
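For instance, here is a minimal Python quine, a program whose output is exactly its own source:

```python
# A minimal Python quine: running this prints the program's own source.
# The trick: keep a template of the program as data, then substitute
# the template's own repr() into itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```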
It is possible to write a program that does arbitrary computation on its own source code, by doing variants of the above. (That being said, you have to be careful to avoid running into Halting-problem variants.)
Thank you for treating it as a “today’s lucky 10,000” event. I am aware of quines (though not much more than just ‘aware’), and what I am worried about is whether the people who created FairBot were careful enough.
I haven’t spotted any inherent contradictions in the FairBot variants thus far.
That being said, now that I look at it, there’s no guarantee that said FairBot variants will ever halt… in which case they aren’t proper agents, in much the same way that while (true) {} isn’t a proper agent.
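A toy illustration of the worry (this one decides by direct simulation, which is my substitution; the actual FairBot variants search for proofs, but an unbounded proof search that never finds a proof fails the same way):

```python
# Toy non-halting 'FairBot': it decides by running its opponent
# against itself, so two copies regress forever. Python's
# RecursionError stands in for 'never halts'.
def simulating_fairbot(opponent):
    return "C" if opponent(simulating_fairbot) == "C" else "D"

try:
    simulating_fairbot(simulating_fairbot)
except RecursionError:
    print("the mutual simulations never bottom out")
```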
I’ve seen ‘they run for some large amount of time/possible proofs, then they switch to their codepath for no proof found’. Not in code, but in a paper that proved or tried to prove something like this:
I assert that this agent will Cooperate with itself. When you ask me why, I mutter something about the ‘Assumable Provability Theorem’, and declare that this means ‘if something is provable within a proof system, starting from the premise that the quoted statement is provable inside the quoted proof system, then it’s thereby provable within the system’, and therefore that Suspicious-FairBot will Cooperate with itself.
except with the ‘searching for a long time/lots of proofs’ condition, rather than searching forever.
It might have been this one:
http://intelligence.org/files/ParametricBoundedLobsTheorem.pdf
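For reference, the ‘Assumable Provability Theorem’ in that quote is Löb’s Theorem. As a rule: if the system proves □P → P, then it proves P outright. Internalized, it is the characteristic axiom of GL:

```latex
% Löb's Theorem, internalized as GL's characteristic axiom:
% if provability of P provably implies P, then P is provable.
\[
  \Box(\Box P \rightarrow P) \rightarrow \Box P
\]
```

The linked paper’s bounded version, roughly speaking, replaces □ with □ₖ, ‘provable by a proof of length at most k’, with conditions on how the bounds grow.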
I’ve seen ‘they run for some large amount of time/possible proofs, then they switch to their codepath for no proof found’
Sure… in which case there’s a third option, where there is no proof that they cooperate/don’t cooperate with themselves, precisely because they have limited runtime and so stop looking at proofs ‘prematurely’.
(So, for instance, there might be a proof that an agent that searched for 2^2000 clock cycles would cooperate with itself, and so it satisfies Löb’s Theorem… but no easier-to-find proof.)
Your quoted bit about Löb’s Theorem says nothing about how complex said proof will be.
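To make that concrete, a schematic bounded proof search (the verifier here is a stub, not any real proof system; MAX_LEN stands in for the 2^2000-cycle budget):

```python
# Schematic bounded FairBot: enumerate candidate proofs up to a size
# cutoff, cooperate on success, defect otherwise. A proof of
# cooperation that only exists above the cutoff is simply never found.
from itertools import product

MAX_LEN = 6  # proof-size cutoff (stand-in for a clock-cycle budget)

def check_proof(candidate: str, claim: str) -> bool:
    """Stub verifier for some fixed formal system; always fails here."""
    return False

def bounded_fairbot(opponent_source: str) -> str:
    claim = f"{opponent_source} cooperates with bounded_fairbot"
    for n in range(1, MAX_LEN + 1):
        for cand in product("01", repeat=n):  # all bitstrings of length n
            if check_proof("".join(cand), claim):
                return "C"
    return "D"  # no short-enough proof found

print(bounded_fairbot("bounded_fairbot"))  # 'D' with this stub verifier
```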
Yes, you can play with them yourself: https://github.com/machine-intelligence/provability
Impredicative definitions are quite common in mathematics.
“Definition” was probably the wrong word to use. Since we are talking in the context of provability, I meant “a short string of text that replaces a longer string of text for the ease of the human reader, but is parsed as the longer string when you actually work with it”. Impredicative definitions are indeed quite common, but they go hand in hand with proofs of their consistency: a proof that a functional equation has a solution, an example of a group to show that the group axioms are consistent, or, more generally, a model of some axiom system.
Sadly I am not familiar with Haskell, so your link is of limited use to me. But it seems to contain a lot of code and no proofs, so it is probably not what I am looking for anyway.
What I am looking for probably looks like a proof of “GL ⊢ (∃f ∀g (f(g) = □g(f)))”. I am in many ways uncertain about whether this is the right formula (is GL the right system to use here (does it even support quantifiers over function symbols? if not, there should be an extension that does); is “does f exist” the right question to be asking; does “f(g) = □g(f)” correctly describe what we want from FairBot). But some proof of that kind should exist, otherwise why should we think that such a FairBot exists/is consistent?
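Spelled out a bit more (my own tentative rendering, with C for Cooperate and □ for provability in the fixed system), the property I want a FairBot f to provably satisfy is something like:

```latex
% Tentative fixed-point property for FairBot: f cooperates with g
% exactly when it is provable that g cooperates with f.
\[
  \forall g:\quad f(g) = C \;\leftrightarrow\; \Box\big(\, g(f) = C \,\big)
\]
```

and the existence question is whether some concrete program f satisfies this, which is where the quining construction is supposed to come in.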