I’d love to see a worked example. The cases I come up with are all practice for or demonstrations of feasibility for casual normal trade/interactions.
I think I know at least some of the examples you refer to. I think the causality in these cases is a shared past of the agents making the trade. But I’m not sure that breaks the argument in cases where the agents involved are not aware of that shared past, for example (though not only) because they have forgotten it or intentionally removed the memory.
There is convoluted causality in a lot of trust relationships. “I trust this transaction because most people are honest in this situation” works BECAUSE most people are, in fact, honest in that situation. And being honest does (slightly) reinforce that for future transactions, including transactions between strangers, which get easier only to the degree those strangers are similar to you.
But, while complex and involving human social norms and “prediction”, this kind of trust is not comparable to Newcomb’s problem (one-shot, high-stakes, no side-effects) or acausal trade (zero-shot, no path to specific knowledge of the outcome).
In which way is sharing some common social knowledge relevantly different from sharing the same physical universe?
Common social knowledge has predictive power and causal pathways to update the knowledge (and others’ knowledge of the social averages which contain you). Acausal trade doesn’t even share the same physical universe; it’s pure theory, with no way to adjust over time.
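To make the asymmetry concrete, here’s a toy Python sketch (entirely my own illustration, not anything from the thread; every constant in it is an arbitrary stand-in): when a causal observation channel exists, a mistaken shared estimate of stranger honesty gets corrected and trade takes off; remove the channel, as in the acausal case, and the estimate is frozen forever.

```python
import random

# Toy model of the causal feedback loop in reputation-based trust.
# All numbers (true honesty rate, trust threshold, learning rate,
# round count) are made up for illustration.

TRUE_HONESTY = 0.8      # actual fraction of honest strangers
TRUST_THRESHOLD = 0.6   # trade happens only if the shared estimate exceeds this

def run(rounds, causal_feedback):
    """Count trades; `causal_feedback` toggles the observation channel."""
    estimate = 0.5  # shared prior about stranger honesty -- wrong on purpose
    trades = 0
    for _ in range(rounds):
        if estimate >= TRUST_THRESHOLD:
            trades += 1
        if causal_feedback:
            # Causal pathway: gossip/reputation lets the community observe
            # real interactions, pulling the shared estimate toward truth.
            outcome = 1.0 if random.random() < TRUE_HONESTY else 0.0
            estimate += 0.05 * (outcome - estimate)
    return estimate, trades

random.seed(0)
print("with causal feedback:    estimate=%.2f, trades=%d" % run(1000, True))
print("without causal feedback: estimate=%.2f, trades=%d" % run(1000, False))
```

Under these made-up parameters, the feedback run converges near the true honesty rate and trades on almost every round, while the no-feedback run never trades at all: the “no way to adjust over time” point in miniature.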
“Casual norm trade/interactions” does seem like most of the obvious example-space. The generator for this thought comes from chatting with Andrew Critch. See this post for some reference: http://acritch.com/deserving-trust/
Typo: s/casual/causal/. These seem to be diffuse reputation cases, where one recognizes that signaling is leaky, and it’s more effective to be trustworthy than to only appear trustworthy. Not for subtle Newcomb or acausal reasons, but because of highly evolved betrayal-detection mechanisms.