I am not really trying to endorse one construction or another.
The point is: if you are playing the game “say as many true things as you can without messing up” and you are good at it, you don’t end up saying a set of things which is closed under logical inference.
It is quite possible that this is consistent with your intuitions, but it wasn’t consistent with mine until I made this observation (and I would guess that most mathematicians would expect you to assert all logical implications of anything you assert).
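As a toy illustration (a minimal sketch, not one of the constructions above; the propositional system and valuation are made up for the example): an agent that asserts its axioms plus everything one modus-ponens step away asserts only truths, yet its set of assertions is not closed under inference.

```python
# Toy example: a resource-bounded "assert only truths" strategy whose output
# is not deductively closed. Sentences are atoms (strings) or implications
# represented as ("->", antecedent, consequent) tuples.

AXIOMS = {"p", ("->", "p", "q"), ("->", "q", "r")}

# A valuation under which every axiom (and hence every consequence) is true.
TRUE_ATOMS = {"p", "q", "r"}

def is_true(s):
    """Evaluate a sentence under the fixed valuation."""
    if isinstance(s, str):
        return s in TRUE_ATOMS
    _, a, b = s
    return (not is_true(a)) or is_true(b)

def modus_ponens_step(sentences):
    """Everything derivable from `sentences` by at most one modus ponens step."""
    derived = set(sentences)
    for s in sentences:
        if isinstance(s, tuple):
            _, a, b = s
            if a in sentences:
                derived.add(b)
    return derived

# The bounded agent: assert the axioms plus everything one inference step away.
asserted = modus_ponens_step(AXIOMS)

# Every assertion is true...
assert all(is_true(s) for s in asserted)

# ...but the asserted set is not deductively closed: it contains q and q -> r
# without containing r, which a second modus ponens step would yield.
closure_step = modus_ponens_step(asserted)
print("asserted:", asserted)
print("missing consequences:", closure_step - asserted)  # {'r'}
```

Running this shows the asserted set contains q and q → r but not r, so the agent says only true things without its output being closed under logical inference.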
Right. So your point is to antipredict the assumption that if an agent is generating statements, then it’s a formal system. The argument is to point out that there seems to be no clear sense in which being a formal system is optimal, since there are lots of strategies that break this property, so a good agent probably doesn’t just parrot a complete list of some formal system’s output. I agree, though seeing an agent’s output as logical statements at all still doesn’t seem like a natural or useful interpretation. I guess my next question would be about the motivation for considering this setting.
I tend to think about and talk about agents’ “beliefs,” and apply various intuitions about my own beliefs to decision-theoretic problems; this setting is designed to better inform some of those intuitions (in particular, it shifts the border on some conflicts between my naive intuitions and incompleteness results).
This didn’t clarify the situation for me.