Obviously the relevant difference is in their decision metrics. But human decision algorithms, sloppy and inconsistent though they are, are in some significant cases isomorphic to TDT.
If we were both defecting in the Prisoner’s Dilemma, and then I read some of the Sequences, concluded that we were similar decision-makers, and stopped defecting, that would be transparently stupid if you hadn’t also been exposed to the same information that led me to that decision in the first place. If I knew you had also read it, I would want to calculate the expected value of defecting or cooperating given the relative utilities of the possible outcomes and the likelihood that your decision would correspond to my own.
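For concreteness, here’s a minimal sketch of that calculation in Python. The payoff numbers and the correlation probability `p_match` are purely illustrative assumptions on my part, not anything canonical: the idea is just that I cooperate exactly when the expected utility of cooperating, computed under the belief that your choice matches mine with probability `p_match`, exceeds that of defecting.

```python
# Sketch: expected-value comparison for a one-shot Prisoner's Dilemma
# when I believe the other player's decision matches mine with
# probability p_match (because we run similar decision algorithms).
# Payoffs follow the standard ordering T > R > P > S; the exact numbers
# and p_match are illustrative assumptions.

def should_cooperate(R=3.0, P=1.0, T=5.0, S=0.0, p_match=0.9):
    # If I cooperate: with prob p_match you also cooperate (mutual reward R),
    # otherwise you defect and I get the sucker's payoff S.
    eu_cooperate = p_match * R + (1 - p_match) * S
    # If I defect: with prob p_match you also defect (mutual punishment P),
    # otherwise you cooperate and I get the temptation payoff T.
    eu_defect = p_match * P + (1 - p_match) * T
    return eu_cooperate > eu_defect

# With these payoffs, cooperating wins whenever
# p_match > (T - S) / (T - S + R - P), i.e. p_match > 5/7 here.
print(should_cooperate(p_match=0.9))  # True  -> cooperate
print(should_cooperate(p_match=0.5))  # False -> defect
```

The whole question, on this view, is how high `p_match` actually is between two sloppy human reasoners who happen to have read the same arguments.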
I think you’re assuming much sloppier reasoning on my part than is actually the case (of course I probably have a bias toward thinking I’m not engaging in sloppy reasoning, but your comment isn’t addressing my actual position). Do I think that if I engage in conservation efforts, this will be associated with a significant increase in the likelihood that we won’t experience catastrophic climate change? Absolutely not. The conservation efforts I engage in are almost entirely for the purpose of signalling credibility to other environmentalists (I say “other,” but it’s difficult to find anyone who identifies as an environmentalist who shares my outlook), and I am completely aware of this.

However, the utility cost of informing oneself, at least to a basic level, about a potential tragedy of the commons where the information is readily available and heavily promoted is extremely low. Humans have a record of resolving some types of tragedies of the commons (though certainly not all), and the more people who are aware of and care about the issue, the greater the chance of the population resolving it: they will practically never do so of their own volition, but they will be more likely to support leaders who take it seriously, not to defect from policies that address it, and so on. The expected benefit of being informed has to be very low indeed not to overcome that minimal cost. And of course you have to include the signalling value of being informed: once informed, you can still signal that you don’t think the issue is a good time investment, or significant at all, but you can also signal your familiarity.
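To make the shape of that comparison explicit, here’s a sketch; every number in it is a placeholder assumption chosen only to show the structure of the argument, not an estimate I’m defending.

```python
# Sketch of the "cost of informing yourself is tiny" comparison.
# All numbers are placeholder assumptions for illustration only.

cost_of_informing = 10.0          # a few hours of reading, in arbitrary utility units
signalling_value = 5.0            # value of being able to signal familiarity either way
p_marginal_difference = 1e-7      # chance one more informed person shifts the outcome
value_if_commons_resolved = 1e9   # value of avoiding the catastrophic outcome

expected_benefit = (signalling_value
                    + p_marginal_difference * value_if_commons_resolved)

# The point: because cost_of_informing is so small, the probability of
# making a marginal difference can be tiny and informing yourself still
# comes out ahead (here 5 + 100 = 105 > 10).
print(expected_benefit > cost_of_informing)  # True
```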
I think that those environmentalists who expect that we can reduce climate change to manageable levels by public campaigning and grassroots action are being unreasonably naive; they’re trying to get the best results they can with their pocket change when what they need is three times their life savings. Sufficient levels of cooperation could resolve the matter simply enough, but people simply don’t work like that. To completely overextend the warfare metaphor: if we’ve got a shot at not facing catastrophe, I don’t think it’s going to look like everyone pulling together and giving their best efforts to fight the barbarians; it’s going to look more like someone coming forward and saying, “If you put me in charge, I’m going to collect some resources from all of you, and we’re going to use them to make a nuclear bomb according to this plan these guys worked out.” Whether the society succeeds or not will hinge on the proliferation of information and how seriously the public takes the issue: whether they mostly say things like “sounds like a good plan, I’m in” and “if it’s really our best chance,” rather than “I care more about the issues on Leader B’s platform” and “I have no idea what that even means.”