Right. So your point is to antipredict the assumption that if an agent is generating statements, then it's a formal system. The argument is that there seems to be no clear sense in which being a formal system is optimal, since there are many strategies that break this property, so a good agent probably doesn't just parrot the complete output of some formal system. I agree, though interpreting the agent's output as logical statements at all still doesn't seem like a natural or useful framing. I guess my next question would be about the motivation for considering that setting.
I tend to think and talk about agents' "beliefs," and I apply various intuitions about my own beliefs to decision-theoretic problems; this setting is designed to better inform some of those intuitions (in particular, it shifts the boundary on some conflicts between my naive intuitions and incompleteness results).
This didn’t clarify the situation for me.