… Why does it matter to an AI operating within a system p whether proposition C is true? Shouldn’t we create the most powerful sound system p and then, instead of caring about what is true, care about what is provable?
EDIT: Or, instead of tracking ‘truth’, track ‘belief’: “If this logical system proves proposition C, then C is believed,” where ‘believed’ is a lower standard than ‘proved’ (given a proof of “if a then b” and belief in a, the result is only a belief in b).
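The belief-propagation rule above can be sketched in a few lines of Python. This is a minimal illustration, not any established system: the names `Status`, `kb`, and `modus_ponens` are all hypothetical, and the key assumption is that a derived proposition inherits the *weaker* of the two premises’ epistemic statuses.

```python
from enum import IntEnum

class Status(IntEnum):
    UNKNOWN = 0
    BELIEVED = 1   # lower standard than proved
    PROVED = 2

# Hypothetical knowledge base mapping propositions to epistemic status:
# the implication "a -> b" is proved, but "a" itself is only believed.
kb = {"a": Status.BELIEVED, "a->b": Status.PROVED}

def modus_ponens(kb, a, implication, b):
    """Given some status for 'a -> b' and for a, derive a status for b.

    The conclusion gets the weaker of the two premises' statuses: a proof
    of the implication combined with mere belief in a yields only belief in b.
    """
    derived = min(kb.get(a, Status.UNKNOWN), kb.get(implication, Status.UNKNOWN))
    if derived > kb.get(b, Status.UNKNOWN):
        kb[b] = derived
    return kb.get(b, Status.UNKNOWN)

print(modus_ponens(kb, "a", "a->b", "b").name)  # b ends up only BELIEVED, not PROVED
```

Using an ordered enum means “take the weaker premise” is just `min`, which also matches the observation that belief, unlike proof, degrades as inferences chain.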
Is there an internal benefit to something being provable besides it being true?
It’s a lot easier to tell if something is provable than if it is true.