I agree. Desrtopa is taking Eliezer’s barbarians post too far for a number of reasons.
1) Eliezer’s decision theory is, at the least, controversial, which means many people here may not agree with it.
2) Even if they agree with it, it doesn’t mean they have attained rationality in Eliezer’s sense.
3) Even if they have attained this sort of rationality, we are but a small community, and the rest of the world is still not going to cooperate with us. Our attempts to cooperate with them will be impotent.
Desrtopa: Just because it upholds an ideal of rationality that supports cooperation does not mean we have attained that ideal. Again, the question is not what you’d like to be true, but about what’s actually true. If you’re still shocked by people’s low confidence in global warming, it’s time to consider the possibility that your model of the world—one in which people are running around executing TDT—is wrong.
Those are all good reasons but as far as I can tell Desrtopa would probably give the right answer if questioned about any of those. He seems to be aware of how people actually behave (not remotely TDTish) but this gets overridden by a flashing neon light saying “Rah Cooperation!”.
There are plenty of ways in which I personally avoid cooperation for my own benefit. But in general I think that a personal policy of not informing oneself at even a basic level about tragedies of commons where the information is readily available is not beneficial, because humans have a sufficiently developed propensity for resolving tragedies of commons to give at least the most basic information marginal benefit.
To me, this comment basically concedes that you’re wrong but attempts to disguise it in a face-saving way. If you could have said that people should be informing themselves at the socially optimal level, as you’ve been implying with your TDT arguments above, you would have. Instead, you backed off and said that people ought to be informing themselves at least a little.
Just to be sure, let me rewrite your claim precisely, in the sense you must mean it given your supposed continued disagreement:
In general I think that a personal policy of not informing oneself at even a basic level about tragedies of commons where the information is readily available is not beneficial to the individual, because humans have a sufficiently developed propensity for resolving tragedies of commons to give at least the most basic information marginal benefit to the individual.
Assuming that’s what you’re saying, it’s easy to see that even this is an overreach. The question on the table is whether people should be informing themselves about global warming. Whether the first epsilon of information one gets from “informing oneself” (as opposed to hearing the background noise) is beneficial to the individual relative to the cost of attaining it, is a question of derivatives of cost and benefit functions at zero, and it could go either way. You simply can’t make a general statement about how these derivatives relate for the class of Commons Problems. But more importantly, even if you could, SO WHAT? The question is not whether people should be informing themselves a bit, the question is whether they should be informing themselves at anywhere close to the socially optimal level. And by admitting it’s a tragedy of the commons, we are already ANSWERING that question.
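A minimal formalization of that marginal argument, in notation of my own (the functions and symbols below are illustrative assumptions, not anything stated in the thread): let $B_i(x)$ be individual $i$’s private benefit from acquiring $x$ units of information about the problem and $C(x)$ the cost of acquiring it. The individual informs himself at all only if $B_i'(0) > C'(0)$, and stops at the privately optimal level $x_i^*$ where

$$B_i'(x_i^*) = C'(x_i^*),$$

whereas the socially optimal level $x^{**}$ satisfies

$$\sum_j B_j'(x^{**}) = C'(x^{**}),$$

summing the benefit of $i$’s informedness over everyone it affects. Nothing fixes the sign of $B_i'(0) - C'(0)$ across the whole class of commons problems, and even where it is positive, $x_i^*$ generally falls short of $x^{**}$. That gap is the tragedy under discussion, whether or not the first epsilon of information pays for itself.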
Does that make sense? Am I misunderstanding your position? Has your position changed?
It seems that you are trying to score points for winning the debate. If your interlocutor indeed concedes something in a face-saving way, forcing him to admit it is useless from the truth-seeking point of view.
prase, I really sympathize with that comment. I will be the first to admit that forcing people to concede their incorrectness is typically not the best way of getting them to agree on the truth. See for example this comment.
BUT! On this site we sort of have TWO goals when we argue, truth-seeking and meta-truth-seeking. Yes, we are trying to get closer to the truth on particular topics. But we’re also trying to make ourselves better at arguing and reasoning in general. We are trying to step back and notice what we’re doing, and correct flaws when they are exposed to our scrutiny.
If you look back over this debate, you will see me at several points deliberately stepping back and trying to be extremely clear about what I think is transpiring in the debate itself. I think that’s worth doing, on lesswrong.
To defend the particular sentence you quote: I know that when I was younger, it was entirely possible for me to “escape” from a debate in a face-saving way without realizing I had actually been wrong. I’m sure this still happens from time to time...and I want to know if it’s happening! I hope that LWers will point it out. On LW I think we ought to prioritize killing biases over saving faces.
The key question is: would you believe it if it were your opponent in a heated debate who told you?
I’d like to say yes, but I don’t really know. Am I way off-base here?
Probably the most realistic answer is that I would sometimes believe it, and sometimes not. If not often enough, it’s not worth it. It’s too bad there aren’t more people weighing in on these comments because I’d like to know how the community thinks my priorities should be set. In any case you’ve been around for longer so you probably know better than I.
I think we are speaking about this scenario:
Alice says: “X is true.”
Bob: “No, X is false, because of Z.”
Alice: “But Z is irrelevant with respect to X’, which is what I actually mean.”
Now, Bob agrees with X’. What will Bob say?
1) “Fine, we agree after all.”
2) “Yes, but remember that X is problematic and not entirely equivalent to X’.”
3) “You should openly admit that you were wrong with X.”
If I were in place of Alice, (1) would cause me to abandon X and believe X’ instead. For some time I would deny that they aren’t equivalent, or think that my saying X was only a poor formulation on my part and that I have always believed X’. Later, when I cease identifying with my past self too much, I would admit (at least to myself) that I have changed my opinion. (2) would have similar effects, with more resentment directed at Bob. In case of (3) I would perhaps try to continue debating to win the lost points back by pointing out weak points of Bob’s opinions or debating style, and after calming down I would believe that Bob is a jerk and search hard to find reasons why Z is a bad argument. Eventually I would (hopefully) move to X’ too (I don’t like to believe things which are easily attacked), but it would take longer. I would certainly not admit my error on the spot.
(The above is based on memories of my reactions in several past debates, especially before I read about cognitive biases and such.)
Now, to tell how generalisable our personal anecdotes are, we should organise an experiment. Do you have any idea how to do it easily?
I think the default is that people change specific opinions more in response to the tactful debate style you’re identifying, but are less likely to ever notice that they have in fact changed their opinion. I think explicitly noticing one’s wrongness on specific issues can be really beneficial in making a person less convinced of their rightness more globally, and therefore more willing to change their mind in general. My question is how we ought to balance these twin goals.
It would be much easier to get at the first effect by experiment than the second, since the latter is a much more long-term investment in noticing one’s biases more generally. And if we could get at both, we would still have to decide how much we care about one versus the other, on LW.
Personally I am becoming inclined to give up the second goal.
Since here on LW changing one’s opinion is considered a supreme virtue, I would even suspect that the long-term users are confabulating that they have changed their opinion when actually they didn’t. Anyway, a technique that might be useful is keeping detailed diaries of what one thinks and reviewing them after a few years (or, for that matter, looking at what one has written on the internet a few years ago). The downside is, of course, that writing beliefs down may make their holders even more entrenched.
Entirely plausible—cognitive dissonance, public commitment, backfire effect, etc. Do you think this possibility negates the value, or are there effective counter-measures?
I don’t think I have an idea how strong all relevant effects and measures are.
There’s a big difference between:
“it’s best if we notice and acknowledge when we’re wrong, and therefore I will do my best to notice and acknowledge when I’m wrong”
“it’s best if we notice and acknowledge when we’re wrong, and therefore I will upvote, praise, and otherwise reinforce such acknowledgements when I notice them”
and
“it’s best if we notice and acknowledge when we’re wrong, and therefore I will downvote, criticize, and otherwise punish failure to do so.”
True in the immediate sense, but I disagree in the global sense that we should encourage face-saving on LW, since doing so will IMO penalize truth-seeking in general. Scoring points for winning the debate is a valid and important mechanism for reinforcing behaviors that lead to debate-winning, and should be allowed in situations where debate-winning correlates to truth-establishment in general, not just for the arguing parties.
This is also true in the immediate sense, but it somehow implies that the debate-winning behaviours are a net positive with respect to truth-seeking at least in some possible (non-negligibly frequent) circumstances. I find the claim dubious. Can you specify in what circumstances the debate-winning argumentation style is superior to leaving a line of retreat?
Line of retreat is superior for convincing your debate partner, but debate-winning behavior may be superior for convincing uninvolved readers, because it encourages verbal admission of fault which makes it easier to discern the prevailing truth as a reader.
That isn’t actually the reason. The reason debate-winning behavior is superior for convincing bystanders is that it appeals to their natural desire to side with the status-gaining triumphant party. As such, it is a species of Dark Art.
This is what I am not sure about. I know that I will be more likely to admit being wrong when I have a chance to do it in a face-saving way (this includes simply saying “you are right” when I am doing it voluntarily and the opponent has debated in a civilised way up to that point) than when my interlocutor tries to force me to do that. I know it but still can’t easily get rid of that bias.
There are several outcomes of a debate where one party is right and the other is wrong:
1) The wrong side admit their wrongness.
2) The wrong side don’t want to admit their wrongness but realise that they have no good arguments and drop out of the debate.
3) The wrong side don’t want to admit their wrongness and still continue debating in hope of defeating the opponent or at least achieving an honourable draw.
4) The wrong side don’t even realise their wrongness.
The exact flavour of debate-winning behaviour I have criticised makes (2) difficult or impossible, consequently increasing the probabilities of (1), (3) or (4). (1) is superior to (2) from almost any point of view, but (2) is similarly superior to (3) and (4), and it is far from clear whether the probability of (1) increases more than the probabilities of (3) and (4) combined when (2) ceases to be an option, or whether it increases at all.
You left off all the cases where the right side admits their wrongness!
Or where both sides admit their wrongness and switch their opinions, or where a third side intervenes and bans them both for trolling. Next time I’ll try to compose a more exhaustive list.
Don’t forget the case where the two parties are talking at cross purposes (e.g. Alice means that a tree falling in a forest with no-one around generates no auditory sensations and Bob means that it does generate acoustic waves) but neither of them realizes that; it doesn’t even occur to each that the other might be meaning something else by sound. (I’m under the impression that this is relatively rare on LW, but it does constitute a sizeable fraction of all arguments I hear elsewhere, both online and in person.)
Well reasoned.
Yes, you are misunderstanding my position. I don’t think that it’s optimal for most individuals to inform themselves about global warming to a “socially optimal” level where everyone takes the issue sufficiently seriously to take grassroots action to resolve it. Human decisionmaking is only isomorphic to TDT in a limited domain and you can only expect so much association between your decisions and others; if you go that far, you’re putting in too much buck for not enough bang, unless you’re getting utility from the information in other ways. But at the point where you don’t have even basic knowledge of global warming, anticipating a negative marginal utility on informing yourself corresponds to a general policy of ignorance that will serve one poorly with respect to a large class of problems.
If there were no correlation between one person’s decisions and another’s, it would probably not be worth anyone’s time to learn about any sort of societal problems at all, but then, we wouldn’t have gotten to the point of being able to have societal problems in the first place.
Unfortunately that response did not convince me that I’m misunderstanding your position.
If people are not using a TDT decision rule, then your original explicit use of TDT reasoning was irrelevant and I don’t know why you would have invoked it at all unless you thought it was actually relevant. And you continue to imply at least a weaker form of that reasoning.
No one is disputing that there is correlation between people’s decisions. The problem is that correlation does not imply that TDT reasoning works! A little bit of correlation does not imply that TDT works a little bit. Unless people are similar to you AND using TDT, you don’t get to magically drag them along with you by choosing to cooperate.
This is a standard textbook tragedy of the commons problem, plain and simple. From where I’m standing I don’t see the relevance of anything else. If you want to continue disagreeing, can you directly tell me whether you think TDT is still relevant and why?
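As an aside, the claim that mere correlation between people’s decisions is not the kind of dependence that makes cooperate-and-drag-others-along reasoning work can be illustrated with a toy model. The sketch below is entirely my own construction (all names and numbers are made up for illustration): other agents’ choices are correlated with one another through a shared signal, but none of them depend on mine, so switching to cooperation only burns the private cost.

```python
# Toy model (my own construction, not from the thread): other agents' choices
# are correlated with each other through a shared signal, but none of them run
# my decision algorithm, so my choice moves nobody else.  Under those
# assumptions, cooperating just burns the private cost.

import random

N = 1000   # number of other agents
B = 3.0    # total benefit each cooperator adds to the public good
C = 1.0    # private cost of cooperating

def others_choices(shared_signal):
    # Correlated choices: everyone cooperates with probability = shared_signal.
    return [random.random() < shared_signal for _ in range(N)]

def my_payoff(i_cooperate, others):
    # The public good is split evenly among all N + 1 agents.
    public_good_share = (sum(others) + i_cooperate) * B / (N + 1)
    return public_good_share - (C if i_cooperate else 0.0)

random.seed(0)
trials = 2000
gain_from_cooperating = 0.0
for _ in range(trials):
    signal = random.random()
    others = others_choices(signal)   # identical whether or not I cooperate
    gain_from_cooperating += my_payoff(True, others) - my_payoff(False, others)

# Expected gain is B / (N + 1) - C, about -0.997 here: cooperating is a pure
# loss unless other agents' choices actually depend on mine.
print(gain_from_cooperating / trials)
```

The others’ choices can be as correlated with each other as you like; what matters for the cooperate-because-others-will-too argument is whether their choices depend on mine, which in this sketch they do not.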
People don’t use a generalized form of TDT, but human decisionmaking is isomorphic to TDT in some domains. Other people don’t have to consciously be using TDT to sometimes make decisions based on a judgment of how likely it is that other people will behave similarly.
Tragedies of commons are not universally unresolvable. It’s to everyone’s advantage for everyone to pool their resources for some projects for the public good, but it’s also advantageous for each individual to opt out of contributing their resources. But under the institution of governments, we have sufficient incentives to prevent most people from opting out. Simply saying “It’s a tragedy of the commons problem” doesn’t mean there’s no chance of resolving it and therefore no use in knowing about it.
Maybe it would help if you gave me an example of what you have in mind here.
Well, take Stop Voting For Nincompoops, for example. If you were to just spontaneously decide “I’m going to vote for the candidate I really think best represents my principles in hope that that has a positive effect on the electoral process,” you have no business being surprised if barely anyone thinks the same thing and the gesture amounts to nothing. But if you read an essay encouraging you to do so, posted in a place where many people apply reasoning processes similar to your own, the choice you make is a lot more likely to reflect the choice a lot of other people are making.
It seems like this is an example of, at best, a domain on which decisionmaking could use TDT. No one is denying that people could use TDT, though. I was hoping for you to demonstrate an example where people actually seem to be behaving in accordance with TDT. (It is not enough to just argue that people reason fairly similarly in certain domains).
“Isomorphic” is a strong word. Let me know if you have a better example.
Anyway let me go back to this from your previous comment:
Tragedies of commons are not universally unresolvable....Simply saying “It’s a tragedy of the commons problem” doesn’t mean there’s no chance of resolving it and therefore no use in knowing about it.
No one is claiming tragedies of the commons are always unresolvable. We are claiming that unresolved tragedies of the commons are tragedies of the commons! You seem to be suggesting that knowledge is a special thing which enables us to possibly resolve tragedies of the commons and therefore we should seek it out. But in the context of global warming and the current discussion, knowledge-collection is the tragedy of the commons. To the extent that people are underincentivized to seek out knowledge, that is the commons problem we’re talking about.
If you turn around and say, “well they should be seeking out more knowledge because it could potentially resolve the tragedy”...well of course more knowledge could resolve the tragedy of not having enough knowledge, but you have conjured up your “should” from nowhere! The tragedy we’re discussing is what exists after rational individuals decide to gather exactly as much information as a rational agent “should,” where should is defined with respect to that agent’s preferences and the incentives he faces.
Final question: If TDT reasoning did magically get us to the level of informedness on global warming that you think we rationally should be attaining, and if we are not attaining that level of informedness, does that not imply that we aren’t using TDT reasoning? And if other people aren’t using TDT reasoning, does that not imply that it is NOT a good idea for me to start using it? You seem to think that TDT has something to do with how rational agents “should” behave here, but I just don’t see how TDT is relevant.
NO! It implies that you go ahead and use TDT reasoning—which tells you to defect in this case! TDT is not about cooperation!
wedrifid, RIGHT. Sorry, got a little sloppy.
By “TDT reasoning”—I know, I know—I have meant Desrtopa’s use of “TDT reasoning,” which seems to be like TDT + [assumption that everyone else is using TDT].
I shouldn’t say that TDT is irrelevant, but really that it is a needless generalization in this context. I meant that Desrtopa’s invocation of TDT was irrelevant, in that it did nothing to fix the commons problem that we were initially discussing without mention of TDT.
Lack of knowledge of global warming isn’t the tragedy of the commons I’m talking about; even if everyone were informed about global warming, it doesn’t necessarily mean we’d resolve it. Humans can suffer from global climate change despite the entire population being informed about it, and we might find a way to resolve it that works despite most of the population being ignorant.
The question a person starting from a position of ignorance about climate change has to answer is “should I expect that learning about this issue has benefits to me in excess of the effort I’ll have to put in to learn about it?” An answer of “no” corresponds to a low general expectation of information value considering the high availability of the information.
The reason I brought up TDT was as an example of reasoning that relies on a correlation between one agent’s choices and another’s. I didn’t claim at any point that people are actually using TDT. However, if decision theory that assumes correlation between people’s decisions did not outcompete decision theory which does not assume any correlation, we wouldn’t have evolved cooperative tendencies in the first place.
If you were to just spontaneously decide “I’m going to vote for the candidate I really think best represents my principles in hope that that has a positive effect on the electoral process,” you have no business being surprised if barely anyone thinks the same thing and the gesture amounts to nothing.
Determining that the gesture amounts to less than the gesture of going into the polling booth and voting for one of the two party lizards seems rather difficult.
Of course, it’s in practice nearly impossible for me to determine through introspection whether what feels like a “spontaneous” decision on my part is in fact being inspired by some set of external stimuli, and if so which stimuli. And without that data, it’s hard to predict the likelihood of other people being similarly inspired.
So I have no business being too surprised if lots of people do think the same thing, either, even if I can’t point to an inspirational essay in a community of similar reasoners as a mechanism.
In other words, sometimes collective shifts in attitude take hold in ways that feel entirely spontaneous (and sometimes inexplicably so) to the participants.
He may be mistaken about how high-trust the society he lives in is. This is something it is actually surprisingly easy to be wrong about, since our intuitions aren’t built for a society of hundreds of millions living across an entire continent; our minds don’t understand that our friends, family and co-workers are not a representative sample of the actual “tribe” we are living in.
Even if that is the case, he is still mistaken about game theory. While the ‘high trust society’ you describe would encourage cooperation (to the extent that hypocrisy does not serve as a substitute), the justifications Desrtopa has given are in terms of game theory and TDT. They rely on acting as if other agents are TDT agents when they are not—an entirely different issue from dealing with punishment norms enforced by ‘high trust’ agents.
Sure.
We are in agreement on that. But this might better explain why, on second thought, I think it doesn’t matter, at least not in this sense, for the issue of whether educating people about global warming matters.
I think we may have been arguing against a less than maximally charitable interpretation of his argument, which I think isn’t that topical a discussion (even if it serves to clear up a few misconceptions). Whether the less charitable interpretation is the one he now holds, or even the one he actually intended, doesn’t seem that relevant to me.
“Rah cooperation” I think in practice translates into “I think I live in a high-trust-enough society that it’s useful to use this signal to get people to ameliorate this tragedy of the commons situation I’m concerned about.”