Again you have jumped back to what benefits society—nobody questions the idea that it is bad for the society if the population doesn’t know Commons_Threat_X. Nobody questions the idea that it is bad for an individual if everyone else doesn’t know about Commons_Threat_X. What you seem unwilling to concede is that there is negligible benefit to that same individual for him, personally learning about Commons_Threat_X (in a population that isn’t remotely made up of TDT agents).
In a population of sufficiently different agents, yes. If you’re in a population where you can rationally say “I don’t know whether this is a credible threat or not, but no matter how credible the threat is, my learning about it and taking it seriously will not be associated with other people learning about it or taking it seriously,” then there’s no benefit in learning about Commons Threat X. If you assume no association of your action with other people’s, there’s no point in learning about any commons threat ever. But people’s actions are sufficiently associated to resolve some tragedies of commons, so in general when people are facing commons problems, it will tend to be to their benefit to make themselves aware of them when the information is made readily available.
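To put toy numbers on that association point (everything below is an illustrative sketch with made-up parameters, not an estimate of anything real), here “cooperating” stands in for taking a commons threat seriously at a small personal cost, and the shared benefit scales with how many people end up cooperating:

```python
# Toy sketch: all parameters here are made-up illustrations, not estimates.
# "Cooperating" stands in for taking a commons threat seriously at a small
# personal cost; the shared benefit scales with the fraction of cooperators.

def expected_payoff(i_cooperate, association, baseline_cooperators=0.2,
                    personal_cost=1.0, shared_benefit=50.0):
    """Expected payoff to one agent in a large population.

    association: how much my cooperating shifts the expected fraction of
    other cooperators (0 means no association at all).
    """
    others = baseline_cooperators + (0.3 * association if i_cooperate else 0.0)
    commons_value = shared_benefit * min(others, 1.0)  # everyone receives this
    cost = personal_cost if i_cooperate else 0.0       # only cooperators pay it
    return commons_value - cost

for assoc in (0.0, 0.05, 0.2, 0.5):
    gain = expected_payoff(True, assoc) - expected_payoff(False, assoc)
    print(f"association={assoc:.2f}: net gain from cooperating = {gain:+.2f}")
```

With zero association the would-be cooperator just eats the cost; even a modest association makes the shared benefit dominate in this toy setup.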
Any agent encountering a scenario that can be legitimately described as a commons problem with a large population of humans will either defect or be irrational. It really is that simple. Cooperate-Bots are losers.
(Note that agents with actually altruistic preferences are a whole different question.)
It really is that simple. Cooperate-Bots are losers.
Yes. Unless being cooperative makes more Cooperative bots (and not more defecting bots) than defecting makes rational bots (and not Cooperative bots), or it used to do so and the vast majority of the population are still Cooperative bots.
Evolution has in some specific circumstances made humans cooperate and be collectively better off in situations where rational agents with human values wouldn’t have. That’s the beauty of us being adaptation-executers, not fitness-maximizers.
A rational agent among humans could easily spend his time educating them about global warming, if the returns are high enough (I’m not talking about book revenues or payment for appearances or some irrational philanthropist paying him to do so, I’m actually talking about the returns of ameliorating the negative effects of global warming) and the costs low enough. That’s the interesting version of the debate about it being more “important” people know about global warming than tomatoes having genes.
A rational agent among irrational agents can actually be better off helping them cooperate and coordinate to avoid a specific situation in certain conditions rather than just plain old defecting.
I would have to add the adjective “rational” to that sentence in order to agree with it. Humans are agents, but they aren’t rational.
If you reread the sentence you may note that I was careful to make that adjective redundant—sufficiently redundant as to border on absurd. “A rational agent will X or be irrational” is just silly. “A rational agent will X” would have been true but misses the point when talking about humans. That’s why I chose to write “An agent will X or be irrational”.
Yes. Unless being cooperative makes more Cooperative bots (and not more defecting bots) than defecting makes rational bots (and not Cooperative bots)
No. Cooperating is different from being a Cooperate-Bot. A rational agent will cooperate when doing so creates a better outcome, for example by making other people cooperate. A Cooperate-Bot will cooperate even when it creates bad outcomes and completely independently of the responses of other agents or their environment. The only situations where it can be expected to be better to be a Cooperate-Bot than a rational agent that chooses to cooperate are those contrived scenarios where an entity or the environment is specifically constructed to read the mind and motives of the agent and punish it for cooperating for rational reasons.
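To spell the distinction out, here is a minimal sketch with standard prisoner’s dilemma payoffs (my own toy formalisation, not anything from the thread): the conditional cooperator only does what a Cooperate-Bot does when the other party’s response makes cooperating the better move.

```python
# Minimal sketch with standard prisoner's dilemma payoffs. A Cooperate-Bot
# cooperates unconditionally; the conditional cooperator picks whichever move
# gives the better outcome given how the other party responds to it.

T, R, P, S = 5, 3, 1, 0  # temptation, mutual cooperation, mutual defection, sucker

def pd_payoff(my_move, their_move):
    """Payoff to me for one round; moves are 'C' or 'D'."""
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(my_move, their_move)]

def cooperate_bot(their_move_if_i_cooperate, their_move_if_i_defect):
    return 'C'  # cooperates regardless of anything

def conditional_cooperator(their_move_if_i_cooperate, their_move_if_i_defect):
    ev_cooperate = pd_payoff('C', their_move_if_i_cooperate)
    ev_defect = pd_payoff('D', their_move_if_i_defect)
    return 'C' if ev_cooperate >= ev_defect else 'D'

# Opponents described by (their response if I cooperate, their response if I defect).
opponents = {'mirror': ('C', 'D'), 'always_defect': ('D', 'D'), 'always_cooperate': ('C', 'C')}

for name, (if_c, if_d) in opponents.items():
    for strategy in (cooperate_bot, conditional_cooperator):
        move = strategy(if_c, if_d)
        their_move = if_c if move == 'C' else if_d
        print(f"{name:16s} vs {strategy.__name__:22s}: my payoff = {pd_payoff(move, their_move)}")
```

Against the mirror both strategies cooperate and do equally well; against everyone else the Cooperate-Bot does the same or worse.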
I don’t understand why you have gone through my various comments here to argue with trivially true statements. I was under the impression that I mostly joined the conversation agreeing with you.
A rational agent among irrational agents can actually be better off helping them cooperate and coordinate to avoid a specific situation in certain conditions rather than just plain old defecting.
Yes. When an agent can influence the behavior of other agents and cooperating in order to do so is of sufficient benefit, it will cooperate in order to influence others. If this weren’t the case we wouldn’t bother considering most of the game-theoretic scenarios that we construct.
A Cooperate-Bot will cooperate even when it creates bad outcomes and completely independently of the responses of other agents or their environment.
That dosen’t mean they can’t win, as in being the only bots left standing. It is trivially easy to construct such situations. Obviously this won’t help the individuals.
I don’t understand why you have gone through my various comments here to argue with trivially true statements. I was under the impression that I mostly joined the conversation agreeing with you.
I wasn’t arguing with the statements. I think I even generally affirmed your comments at the start of my comments to avoid confusion. I was just emphasising that while this is settled the argument best version of the argument about the utility of trying to educate other people on global warming probably isn’t.
Also two comments don’t really seem like “going through several of your comments” in my eyes!
If you reread the sentence you may note that I was careful to make that adjective redundant—sufficiently redundant as to border on absurd. “A rational agent will X or be irrational” is just silly. “A rational agent will X” would have been true but misses the point when talking about humans. That’s why I chose to write “An agent will X or be irrational”.
Indeed, I obviously didn’t register the sentence properly, edited.
Desrtopa, can we be careful about what it means to be “different” from other agents? Without being careful, we might reach for any old intuitive metric. But it’s not enough to be mentally similar to other agents across just any metric. For your reasoning to work, they have to be executing the same decision rule. That’s the metric that matters here.
Suppose we start out identical but NOT reasoning as per TDT—we defect in the prisoner’s dilemma, say—but then you read some LW and modify your decision rule so that when deciding what to do, you imagine that you’re deciding for both of us, since we’re so similar after all. Well, that’s not going to work too well, is it? My behavior isn’t going to change any, since, after all, you can’t actually influence it by your own reading about TDT.
So don’t be so quick to cast your faith in TDT reasoning. Everyone can look very similar in every respect EXCEPT the one that matters, namely whether they are using TDT reasoning.
With this in mind, if you reread the bayesians versus barbarians post you linked to, you should be able to see that it reads more like an existence proof of a cooperate-cooperate equilibrium. It does not say that we will necessarily find ourselves in such an equilibrium just by virtue of being sufficiently similar.
Obviously the relevant difference is in their decision metrics. But human decision algorithms, sloppy and inconsistent though they are, are in some significant cases isomorphic to TDT.
If we were both defecting in the Prisoner’s dilemma, and then I read some of the sequences, thought that we were both similar decisionmakers, and stopped defecting, that would be transparently stupid if you hadn’t also been exposed to the same information that led me to make the decision in the first place. If I knew you had also read it, I would want to calculate the expected value of defecting or cooperating given the relative utilities of the possible outcomes and the likelihood that your decision would correspond to my own.
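For concreteness, that calculation with standard payoffs and a single parameter p for the chance that your decision matches mine (this framing is mine, not a quote from anyone here) looks like this:

```python
# Back-of-the-envelope version of the calculation: p is the probability that
# the other player's decision matches mine, payoffs are the standard PD values.

T, R, P, S = 5, 3, 1, 0  # temptation, mutual cooperation, mutual defection, sucker

def ev_cooperate(p):
    return p * R + (1 - p) * S

def ev_defect(p):
    return (1 - p) * T + p * P

# Cooperating wins exactly when p exceeds (T - S) / (T - S + R - P).
threshold = (T - S) / (T - S + R - P)
print(f"cooperate iff p > {threshold:.3f}")

for p in (0.5, 0.7, 0.72, 0.9):
    better = "cooperate" if ev_cooperate(p) > ev_defect(p) else "defect"
    print(f"p = {p:.2f}: EV(C) = {ev_cooperate(p):.2f}, EV(D) = {ev_defect(p):.2f} -> {better}")
```

With these particular payoffs cooperating only wins once p clears about 0.71; the threshold drops as the temptation payoff shrinks relative to the reward for mutual cooperation.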
I think you’re assuming much sloppier reasoning on my part than is actually the case (of course I probably have a bias in favor of thinking I’m not engaging in sloppy reasoning, but your comment isn’t addressing my actual position.) Do I think that if I engage in conservation efforts, this will be associated with a significant increase in likelihood that we won’t experience catastrophic climate change? Absolutely not. Those conservation efforts I engage in are almost entirely for the purpose of signalling credibility to other environmentalists (I say “other” but it’s difficult to find anyone who identifies as an environmentalist who shares my outlook,) and I am completely aware of this. However, the utility cost of informing oneself about a potential tragedy of commons where the information is readily available and heavily promoted, at least to a basic level, is extremely low, and humans have a record of resolving some types of tragedies of commons (although certainly not all,) and the more people who’re aware of and care about the issue, the greater the chance of the population resolving it (they practically never will of their own volition, but they will be more likely to support leaders who take it seriously and not defect from policies that address it and so on.) The expected utility has to be very low to not overcome the minimal cost of informing themselves. And of course, you have to include the signalling value of being informed (when you’re informed you can still signal that you don’t think the issue is a good time investment, or significant at all, but you can also signal your familiarity.)
I think that those environmentalists who expect that by public campaigning and grassroots action we can reduce climate change to manageable levels are being unreasonably naive; they’re trying to get the best results they can with their pocket change when what they need is three times their life savings. Sufficient levels of cooperation could resolve the matter simply enough, but people simply don’t work like that. To completely overextend the warfare metaphor, I think that if we’ve got a shot at not facing catastrophe, it’s not going to look like everyone pulling together and giving their best efforts to fight the barbarians, it’s going to look more like someone coming forward and saying “If you put me in charge I’m going to collect some resources from all of you, and we’re going to use them to make a nuclear bomb according to this plan these guys worked out.” Whether the society succeeds or not will hinge on proliferation of information and how seriously the public takes the issue, whether they mostly say things like “sounds like a good plan, I’m in,” and “if it’s really our best chance,” rather than “I care more about the issues on Leader B’s platform” and “I have no idea what that even means.”