Bayesian agents are logically omniscient, and I think a large fraction of deceptive practices rely on asymmetries in computation time between two agents with access to slightly different information (e.g., generating a lie is cheap, while checking that new statement for consistency against all of my previous statements is expensive).
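A toy illustration of that asymmetry (all names here are made up for the example): fabricating a fresh claim can be constant-time, while auditing it against a growing record of prior claims scales with the size of that record.

```python
# Toy model of the liar/auditor asymmetry (hypothetical names throughout):
# fabricating a claim is cheap; auditing it against the whole record is not.

def generate_lie(history):
    """The liar's job: emit a fresh claim -- constant work in this toy model."""
    return f"claim_{len(history)}"

def check_consistency(history, new_claim, contradicts):
    """The auditor's job: test the new claim against every prior statement,
    which is O(n) -- or worse, if pairwise consistency of the whole record
    has to be re-verified."""
    return all(not contradicts(old, new_claim) for old in history)

history = [f"claim_{i}" for i in range(10_000)]
lie = generate_lie(history)                               # ~O(1)
ok = check_consistency(history, lie, lambda a, b: False)  # ~O(n)
print(lie, ok)
```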
My sense is also that two-player games between bayesian agents are actually underspecified and give rise to all kinds of weird things, due to the necessity of infinite regress (i.e. an agent modeling the other agent modeling itself modeling the other agent, etc.), which doesn't reliably converge, though I am not confident of this. A lot of decision theory seems to do weird things with bayesian agents.
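Here is a minimal sketch of that regress (setup invented for illustration) in matching pennies: each added level of "I model you modeling me" flips the best response, so the chain of pure best responses cycles rather than converging.

```python
# Matching pennies: the matcher wins when the coins match, the mismatcher
# when they differ. Level-k reasoning -- best-responding to a model of the
# opponent reasoning at level k-1 -- cycles instead of converging.

def best_response(opponent_action, i_am_matcher):
    if i_am_matcher:
        return opponent_action  # copy them
    return "tails" if opponent_action == "heads" else "heads"  # differ

def level_k_action(k, i_am_matcher, level0="heads"):
    """My action if I model you modeling me... to recursion depth k."""
    if k == 0:
        return level0  # arbitrary base-level guess
    your_action = level_k_action(k - 1, not i_am_matcher, level0)
    return best_response(your_action, i_am_matcher)

for k in range(8):
    print(k, level_k_action(k, i_am_matcher=False))
# heads, tails, tails, heads, heads, tails, tails, heads -- a cycle; deepening
# the regress never settles on a pure action (the standard fix, a mixed
# equilibrium, lives outside this chain of pure best responses).
```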
So overall, I am not sure how well you can prove theorems in this space without first making a lot of progress in decision theory, and I expect a lot of our confusions in decision theory to be resolved by moving away from bayesianism.
Hm… I like the idea of an agent deceiving another because of its bounds on computation time, but I could imagine many stable (though smaller) setups in which that wouldn't happen. I'm curious whether a good bayesian agent could do "almost perfectly" on many questions given limited computation. For instance, a good bayesian would be using bayesianism to semi-optimally allocate whatever computation budget it has (assuming it has some sort of intuition, which I assume is necessary?)
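One way to picture that (a sketch under made-up assumptions, not a real proposal): an estimator with a fixed sample budget that greedily spends each next sample on whichever question its posterior is currently most uncertain about.

```python
import random

# Toy "bounded bayesian": estimate several unknown coin biases under a fixed
# total sample budget, always spending the next sample on the question whose
# posterior (Beta) variance is largest. The setup is invented for illustration.

def beta_variance(a, b):
    return a * b / ((a + b) ** 2 * (a + b + 1))

true_biases = [0.9, 0.5, 0.05]               # hidden answers
posteriors = [[1, 1] for _ in true_biases]   # Beta(1, 1) priors: [heads+1, tails+1]
budget = 300                                 # total computation allowed

for _ in range(budget):
    # Greedy allocation: sample the question we currently know least about.
    i = max(range(len(posteriors)), key=lambda j: beta_variance(*posteriors[j]))
    heads = random.random() < true_biases[i]
    posteriors[i][0 if heads else 1] += 1

for (a, b), p in zip(posteriors, true_biases):
    print(f"true={p:.2f}  estimate={a / (a + b):.2f}  samples={a + b - 2}")
# The near-deterministic questions resolve cheaply; the budget concentrates on
# the genuinely uncertain one -- "semi-optimal" use of limited computation.
```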
On being underspecified: it seems to me that our models of agent cognition have always been pretty underspecified, so I would definitely agree here. "Ideal" bayesian agents are somewhat ridiculously overpowered and unrealistic.
I found the simulations in ProbMods interesting for modeling similar things; I'd like to see a lot more simulations for this kind of work.
https://probmods.org/
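ProbMods itself uses WebPPL, but a rough Python analogue of its nested-inference pattern (all specifics invented here) is an agent that infers by simulating another agent's own inference, cut off at finite depth:

```python
# States: how many of 3 apples are red; utterances a speaker can choose.
states = [1, 2, 3]
utterances = ["some", "all"]
meaning = {"some": lambda s: s >= 1, "all": lambda s: s == 3}

def literal_listener(u):
    support = [s for s in states if meaning[u](s)]
    return {s: 1 / len(support) for s in support}

def speaker(s):
    # The speaker prefers utterances under which a literal listener recovers s.
    scores = {u: literal_listener(u).get(s, 0.0) for u in utterances}
    total = sum(scores.values())
    return {u: v / total for u, v in scores.items()}

def pragmatic_listener(u):
    # Infers the state by simulating the speaker (uniform prior over states).
    scores = {s: speaker(s).get(u, 0.0) for s in states}
    total = sum(scores.values())
    return {s: round(v / total, 3) for s, v in scores.items()}

print(pragmatic_listener("some"))
# {1: 0.444, 2: 0.444, 3: 0.111}: "some" is strengthened toward "some but not
# all" -- one layer of agent-modeling-agent, cut off at finite depth.
```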