If you think evolution has a utility function, and that it’s the SAME function that an agent formed by an evolutionary process has, you’re not likely to get me to follow you down any experimental or reasoning path. And if you think this utility function is “perfectly selfish”, you’ve got EVEN MORE work cut out in defining terms, because those just don’t mean what I think you want them to.
Empathy as a heuristic to enable cooperation is easy to understand, but when normatively modeling things, you have to deconstruct the heuristics into actual goals and strategies.
Take a step back and try rereading what I wrote in a charitable light, because it appears you have completely misconstrued what I was saying.
A major part of the “cooperation” involved here is in being able to cooperate with yourself. In an environment with a well-mixed group of bots each employing differing strategies, and some kind of reproductive rule (if you have 100 utility, say, spawn a copy of yourself), Cooperate-bots are unlikely to be terribly prolific; they lose out against many other bots.
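To make that setup concrete, here's a minimal sketch in Python of the kind of environment I mean: a well-mixed population, random pairings, and the 100-utility reproduction rule. The payoff numbers, population sizes, and class names are my own assumptions for illustration; the only thing taken from the above is the reproduction rule.

```python
import random

# Assumed prisoner's-dilemma payoffs; the comment above only specifies the
# 100-utility reproduction rule, not these numbers.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
SPAWN_AT = 100


class CooperateBot:
    """Always cooperates, regardless of opponent."""
    def __init__(self):
        self.utility = 0

    def play(self, opponent):
        return "C"


class DefectBot:
    """Always defects, regardless of opponent."""
    def __init__(self):
        self.utility = 0

    def play(self, opponent):
        return "D"


def generation(population, pairings=5000):
    """Well-mixed matching: random pairs each play one one-shot game."""
    for _ in range(pairings):
        a, b = random.sample(population, 2)
        move_a, move_b = a.play(b), b.play(a)
        a.utility += PAYOFF[(move_a, move_b)]
        b.utility += PAYOFF[(move_b, move_a)]
    # Reproduction rule from the comment: at 100 utility, spawn a copy of yourself.
    for bot in list(population):
        while bot.utility >= SPAWN_AT:
            bot.utility -= SPAWN_AT
            population.append(type(bot)())
    return population


pop = [CooperateBot() for _ in range(50)] + [DefectBot() for _ in range(50)]
for _ in range(10):
    pop = generation(pop)
print({cls.__name__: sum(isinstance(b, cls) for b in pop)
       for cls in (CooperateBot, DefectBot)})
```

Run that for a few generations and the defect-bots reproduce off the backs of the cooperate-bots, which is the sense in which cooperate-bots “lose out” here.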
In such an environment, a stratagem of defecting against bots that defect against cooperate-bot is a -cheap- mechanism of coordination; you can coordinate with other “Selfish Altruist” bots, and cooperate with them, but you don’t take a whole lot of hits from failing to defect against cooperate-bot. Additionally, you’re unlikely to run up against very many bots that cooperate with cooperate-bot, but defect against you. As a coordination strategy, it is therefore inexpensive.
And if “computation time” is considered as an expense against utility, which I think reasonably should be the case, you’re doing a relatively good job minimizing this; you have to perform exactly one prediction of what another bot will do. I did mention this was a factor.
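And here's how small that machinery is. Below is a sketch of the decision rule I'm calling “Selfish Altruist” (my label, assumed for illustration): it makes exactly one prediction, namely what the opponent would play against cooperate-bot, and keys its own move off that. Representing “prediction” as directly running the opponent's strategy against a fresh cooperate-bot is a simplification I'm assuming, not a claim about how prediction has to work.

```python
class CooperateBot:
    """Always cooperates; used here only as the prediction target."""
    def play(self, opponent):
        return "C"


class SelfishAltruistBot:
    """Cooperates with bots that cooperate with cooperate-bot;
    defects against bots that defect against cooperate-bot."""
    def play(self, opponent):
        # The single prediction: what would the opponent do to a cooperate-bot?
        # Modeled here as running the opponent's strategy once against a fresh
        # CooperateBot -- a cheap, bounded simulation.
        predicted = opponent.play(CooperateBot())
        return "C" if predicted == "C" else "D"
```

Note that two Selfish Altruists predicting each other bottom out after one step, because the prediction target is always cooperate-bot rather than each other, so there's no regress of simulations to pay for.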