It’s also possible to commit to not updating on a specific piece of information with some probability p between 0 and 1. I could also have arbitrarily complex finite commitment structures such as “out of the set of bits {A, B, C, D, E}, I will update if and only if I learn that at least three of them are true”. Such a condition can of course be represented as a separate bit derived from A, B, C, D, E in the standard three-valued (Kleene) logic of true, false, and unknown (see the sketch below).

Or I can make a “provisional commit”: I have decided not to update on a certain fact, and generally won’t, but may under some circumstances run a computationally expensive operation to decide whether to uncommit. Whether or not I’m actually committed is then theoretically determinable, but determining it may in practice carry a significant minimal computational cost and/or informational requirement (one I might sometimes have a motive to increase intentionally, if I wish to be hard to predict), so to other computationally bounded or non-factually-omniscient agents my commitment status may be unknown.
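For concreteness, here’s a minimal sketch (my own illustration in Python, with hypothetical names like `at_least_k_true`; nothing here is part of the formalism itself) of that derived threshold bit under Kleene’s strong three-valued logic, where `None` stands for “unknown”:

```python
from typing import Optional

def at_least_k_true(bits: list[Optional[bool]], k: int) -> Optional[bool]:
    """Three-valued threshold bit: True if at least k bits are known true,
    False if reaching k true bits is already impossible, None (unknown)
    otherwise. Illustrative sketch, not part of the original formalism."""
    known_true = sum(1 for b in bits if b is True)
    # Upper bound: every still-unknown bit could yet turn out to be true.
    possible_true = known_true + sum(1 for b in bits if b is None)
    if known_true >= k:
        return True   # threshold already met
    if possible_true < k:
        return False  # threshold can no longer be met
    return None       # genuinely still unknown

# Example: A and B known true, C known false, D and E unknown.
print(at_least_k_true([True, True, False, None, None], k=3))  # None (unknown)
print(at_least_k_true([True, True, True, None, None], k=3))   # True
```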
Yep! I hadn’t included pure randomization in the formalism, but it can be done and will yield some interesting insights.
As you mention, we can also include pseudo-randomization. And taking these bounded-rationality considerations into account makes the reasoning richer and more complex: it’s unclear exactly when an agent wants to obfuscate its reasoning from others, etc.
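For instance, here’s a rough sketch (again my own illustration with hypothetical names, not part of the formalism) of a probability-p commitment in both variants: the purely random one, and the seeded pseudo-random one, which is deterministic given the seed yet hard to predict for observers who don’t know it:

```python
import hashlib
import random

def commits_to_ignore(evidence_id: str, p: float, seed: bytes) -> bool:
    """Pseudo-random commitment: deterministically maps (seed, evidence_id)
    to a decision that ignores the evidence with probability ~p.
    Illustrative sketch only."""
    digest = hashlib.sha256(seed + evidence_id.encode()).digest()
    # Interpret the first 8 bytes of the hash as a uniform draw in [0, 1).
    draw = int.from_bytes(digest[:8], "big") / 2**64
    return draw < p

def commits_to_ignore_random(p: float) -> bool:
    """Pure randomization: a fresh coin flip each time, unpredictable
    even to an observer who knows everything about the agent."""
    return random.random() < p

# The pseudo-random version gives the same answer on every call,
# so the agent's behavior is fixed, just opaque without the seed.
print(commits_to_ignore("bit_A", p=0.3, seed=b"private-seed"))
```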