The following is an honest non-rhetorical question: Is it not misleading to use the word ‘cooperation’ as you seem to be using it here? Don’t you still get ‘cooperation’ in this sense if the subsets of agents are not causally interacting with each other (say) but have still semi-Platonically ‘merged’ via implicit logical interaction, as compared to some wider context of decision algorithms that by logical necessity exhibit comparatively less merging? This sets up a situation where an agent can (even accidentally) engineer ‘Pareto improvements’ just by improving its decision algorithm (or more precisely, replacing ‘its’ decision algorithm (everywhere ‘it’ is instantiated, of course...) with a new one that has the relevant properties of a new, possibly very different logical reference class). It’s a total bastardization of the concept of trade, but it seems to be enough to result in some acausal economy (er, that is, some positive-affect-laden mysterious timeless attractor simultaneously constructed and instantiated by timeful interaction) or ‘global cooperation’ as you put it, and yet despite all that timeless interaction there are many ways it could turn out that would not look to our flawed timeful minds like cooperation. I don’t trust my intuitions about what ‘cooperation’ would look like at levels of organization or intelligence much different from my own, so I’m hesitant to use the word.
(I realize this is ‘debating definitions’ but connotations matter a lot when everything is so fuzzily abstract and yet somewhat affect-laden, I think. And anyway I’m not sure I’m actually debating definitions because I might be missing an important property of Pareto improvements that makes their application to agents that are logical-property-shifting-over-time not only a useless analogy but a confused one.)
This question is partially prompted by your post about the use of the word ‘blackmail’ as if it was technically clear and not just intuitively clear which interactions are blackmail, trade, cooperation, et cetera, outside of human social perception (which is of course probably correlated with more-objectively-correct-than-modern-human meta-ethical truths but definitely not precisely so).
If the above still looks like word salad to you… sigh… please let me know, so I can avoid pestering you ’til I’ve worked more on making my concepts and sentences clearer. (If it still looks way too much like word salad but you at least get the gist, that’d be good to know too.)
Is it not misleading to use the word ‘cooperation’ as you seem to be using it here?
Yes, it’s better to just say that there is probably some acausal morally relevant interaction, wherein the agents work on their own goals.
(I don’t understand what you were saying about time/causality. I disagree with Nesov_2009’s treatment of preference as magical substance inherent in parts of things.)