It really is that simple. Cooperate-Bots are losers.
Yes. Unless being cooperative makes more Cooperate-Bots (and not more defecting bots) than defecting makes rational bots (and not Cooperate-Bots), or it used to do so and the vast majority of the population are still Cooperate-Bots.
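To make that condition concrete, here is a toy sketch of the dynamic; the payoffs, contagion rates, and update rule are all made-up numbers of my own, not anything established in this thread. Cooperate-Bots hang on only if observed cooperation converts people to cooperating faster than defection’s payoff edge converts them to defecting.

```python
# Toy model only: every number and the update rule are illustrative assumptions.
T, R, P, S = 5.0, 3.0, 1.0, 0.0   # standard prisoner's-dilemma ordering T > R > P > S

def generation(x, coop_contagion=0.20, defect_contagion=0.05, selection=0.10):
    """x = current fraction of Cooperate-Bots; returns the fraction a generation later."""
    # Expected payoff against a randomly drawn member of the population.
    pay_c = x * R + (1 - x) * S
    pay_d = x * T + (1 - x) * P
    # Imitation of observed behaviour: cooperation is assumed to be more contagious.
    dx_contagion = x * (1 - x) * (coop_contagion - defect_contagion)
    # Payoff-driven switching, which always favours defection in a one-shot dilemma.
    dx_selection = -selection * x * (1 - x) * (pay_d - pay_c)
    return min(1.0, max(0.0, x + dx_contagion + dx_selection))

x = 0.9   # the "vast majority of the population are still Cooperate-Bots" case
for _ in range(300):
    x = generation(x)
print(f"long-run Cooperate-Bot share: {x:.2f}")   # converges toward 0.5 with these numbers
```

Set coop_contagion equal to defect_contagion and the share heads to zero, which is the plain “Cooperate-Bots are losers” case.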
Evolution has in some specific circumstances made humans cooperate and be collectively better off in situations where rational agents with human values wouldn’t have. That’s the beauty of us being adaptation-executers, not fitness-maximizers.
A rational agent among humans could easily spend his time educating them about global warming, if the returns are high enough (I’m not talking about book revenues, payment for appearances, or some irrational philanthropist paying him to do so; I’m talking about the returns from ameliorating the negative effects of global warming) and the costs low enough. That’s the interesting version of the debate about it being more “important” that people know about global warming than that tomatoes have genes.
A rational agent among irrational agents can actually be better off helping them cooperate and coordinate to avoid a specific situation in certain conditions rather than just plain old defecting.
I would add that adjective to agree with this sentence. Humans are agents, but they aren’t rational.
If you reread the sentence you may note that I was careful to make that adjective redundant—sufficiently redundant as to border on absurd. “A rational agent will X or be irrational” is just silly. “A rational agent will X” would have been true but misses the point when talking about humans. That’s why I chose to write “An agent will X or be irrational”.
Yes. Unless being cooperative makes more Cooperate-Bots (and not more defecting bots) than defecting makes rational bots (and not Cooperate-Bots)
No. Cooperating is different to being a Cooperate-Bot. A rational agent will cooperate when doing so creates a better outcome, for example by making other people cooperate. A Cooperate-Bot will cooperate even when it creates bad outcomes and completely independently of the responses of other agents or their environment. The only situations where being a Cooperate-Bot can be expected to beat being a rational agent that chooses to cooperate are contrived scenarios where an entity or the environment is specifically constructed to read the agent’s mind and motives and punish it for cooperating for rational reasons.
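To make the contrast concrete, here is a toy iterated-dilemma comparison; the strategy definitions, payoff numbers, and round count are my own made-up illustrations rather than anything specified above.

```python
# Toy comparison only: payoffs and strategies are illustrative assumptions.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def cooperate_bot(opponent_history):
    return "C"                        # cooperates no matter what the opponent does

def conditional(opponent_history):
    # Cooperates, but mirrors a defection: a stand-in for choosing to cooperate
    # only when it is expected to produce the better outcome.
    return "C" if not opponent_history or opponent_history[-1] == "C" else "D"

def defect_bot(opponent_history):
    return "D"

def total_payoff(me, opponent, rounds=100):
    mine, theirs, total = [], [], 0
    for _ in range(rounds):
        a, b = me(theirs), opponent(mine)   # simultaneous moves based on past rounds
        mine.append(a)
        theirs.append(b)
        total += PAYOFF[(a, b)]
    return total

for name, strategy in [("Cooperate-Bot", cooperate_bot),
                       ("conditional cooperator", conditional)]:
    print(f"{name}: {total_payoff(strategy, conditional)} vs a reciprocator, "
          f"{total_payoff(strategy, defect_bot)} vs a defector")
```

Both collect the full gains from cooperation against a reciprocating opponent; only the conditional strategy avoids being milked by the defector, which is the difference between choosing to cooperate and being a Cooperate-Bot.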
I don’t understand why you have gone through my various comments here to argue with trivially true statements. I was under the impression that I mostly joined the conversation agreeing with you.
A rational agent among irrational agents can actually be better off helping them cooperate and coordinate to avoid a specific situation in certain conditions rather than just plain old defecting.
Yes. When an agent can influence the behavior of other agents and cooperating in order to do so is of sufficient benefit it will cooperate in order to influence others. If this wasn’t the case we wouldn’t bother considering most of the game theoretic scenarios that we construct.
A Cooperate-Bot will cooperate even when it creates bad outcomes and completely independently of the responses of other agents or their environment.
That doesn’t mean they can’t win, as in being the only bots left standing. It is trivially easy to construct such situations. Obviously this won’t help the individuals.
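For what it’s worth, here is one such trivially easy construction; the segregated groups, the upkeep cost, and the growth rule are all assumptions of mine chosen purely to make the situation explicit.

```python
# Toy construction only: segregated groups, illustrative payoffs, arbitrary upkeep.
T, R, P, S = 5, 3, 1, 0          # usual prisoner's-dilemma ordering T > R > P > S
MAINTENANCE_COST = 2             # payoff a bot needs each round just to stay in the game

# Full assortment: every bot only ever plays copies of its own strategy.
per_round = {"Cooperate-Bots": R, "Defect-Bots": P}
groups = {"Cooperate-Bots": 100.0, "Defect-Bots": 100.0}

for _ in range(20):
    for name in groups:
        # Each group grows or shrinks with its net payoff after upkeep.
        groups[name] = max(0.0, groups[name] * (1 + 0.1 * (per_round[name] - MAINTENANCE_COST)))

print({name: round(size) for name, size in groups.items()})
```

Run it and the Defect-Bot group dwindles while the Cooperate-Bots multiply, even though a lone defector dropped among the cooperators would still out-earn any individual cooperator (T = 5 against R = 3), which is the sense in which it won’t help the individuals.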
I don’t understand why you have gone through my various comments here to argue with trivially true statements. I was under the impression that I mostly joined the conversation agreeing with you.
I wasn’t arguing with the statements. I think I even generally affirmed your comments at the start of my comments to avoid confusion. I was just emphasising that while this argument is settled, the best version of the argument about the utility of trying to educate other people on global warming probably isn’t.
Also two comments don’t really seem like “going through several of your comments” in my eyes!
If you reread the sentence you may note that I was careful to make that adjective redundant—sufficiently redundant as to border on absurd. “A rational agent will X or be irrational” is just silly. “A rational agent will X” would have been true but misses the point when talking about humans. That’s why I chose to write “An agent will X or be irrational”.
Indeed, I obviously didn’t register the sentence properly, edited.