Significant anthropogenic global warming is occurring: 70.7, (55, 85, 95)
I’m rather shocked that the numbers on this are so low. It’s higher than polls indicate as the degree of acceptance in America, but then, we’re dealing with a public where supposedly half of the people believe that tomatoes only have genes if they are genetically modified. Is this a subject on which Less Wrongers are significantly meta-contrarian?
I’m also a bit surprised (I would have expected higher figures), but be careful not to misinterpret the data: it doesn’t say that 70.7% of LWers believe in “anthropogenic global warming”; it averages the probabilities given. If you look at the quartiles, even the 25th percentile is at p = 55%, meaning that fewer than 25% of LWers give it a lower-than-half probability.
It seems to indicate that almost all LWers believe it is true (p > 0.5 that it is true), but many of them do so with low confidence. Either because they didn’t study the field enough (and therefore refuse to put too much weight on their belief), or because they consider the field too complicated or not well enough understood to justify a strong probability either way.
That’s how I interpreted it in the first place; “believe in anthropogenic global warming” is a much more nebulous proposition anyway. But while anthropogenic global warming doesn’t yet have the same weight of evidence behind it as, say, evolution, I think that an assignment of about 70% probability represents either critical underconfidence or astonishingly low levels of familiarity with the data.
astonishingly low levels of familiarity with the data.
It doesn’t astonish me. It’s not a terribly important issue for everyday life; it’s basically a political issue.
I think I answered somewhere around 70%; while I’ve read a bit about it, there are plenty of dissenters and the proposition was a bit vague.
The claim that changing the makeup of the atmosphere in some way will affect climate in some way is trivially true; a more specific claim requires detailed study.
It doesn’t astonish me. It’s not a terribly important issue for everyday life; it’s basically a political issue.
I would say that it’s considerably more important for everyday life for most people than knowing whether tomatoes have genes.
Climate change may not represent a major human existential risk, but while the discussion has become highly politicized, the question of whether humans are causing large scale changes in global climate is by no means simply a political question.
If the Blues believe that asteroid strikes represent a credible threat to our civilization, and the Greens believe they don’t, the question of how great a danger asteroid strikes actually pose will remain a scientific matter with direct bearing on survival.
I would say that it’s considerably more important for everyday life for most people than knowing whether tomatoes have genes.
I disagree actually.
For most people neither global warming nor tomatoes having genes matters much. But if I had to choose, I’d say knowing a thing or two about basic biology has some impact on how you make your choices with regard to, say, healthcare, how much you spend on groceries, or what your future shock level is.
Global warming, even if it does have a big impact on your life, will not be much affected by your knowing anything about it. Pretty much anything an individual could do against it has a very small impact on how global warming will turn out. Saving $50 a month, or a small improvement in the odds of choosing the better treatment, has a pretty measurable impact on that individual.
Taking global warming as a major threat for now (full disclosure: I think global warming is not a threat to human survival, though it may contribute to societal collapse in a worst-case scenario), it is quite obviously a tragedy of the commons problem.
There is no incentive for an individual to do anything about it, or even know anything about it, except to conform to a “low carbon footprint is high status” meme in order to derive benefit in his social life and feel morally superior to others.
You don’t need to know whether a tomato has genes to know who has a reputation as a good doctor and to do what they say. It might affect your buying habits if you believe that eating genes is bad for you, but it’s entirely probable that a person will make their healthcare and shopping decisions without any reference to the belief at all.
As I just said to xv15, in a tragedy of the commons situation, you either want to conserve if you think enough people are running a sufficiently similar decision algorithm, or you want a policy of conservation in place. The rationalist doesn’t want to fight the barbarians, but they’d rather that they and everyone else on their side be forced to fight.
The rationalist doesn’t want to fight the barbarians, but they’d rather that they and everyone else on their side be forced to fight.
So one should just start fighting and hope others follow? Why not just be a hypocrite? We humans are good at it, and that way you can promote the social norm you wish to inspire at a much smaller cost!
It comes out much better after cost benefit analysis. Yay rationality! :D
You don’t need to know whether a tomato has genes to know who has a reputation as a good doctor and to do what they say.
Why bother to learn what global warming is if it suffices for you to know it is a buzzword that makes the hybrid car you are going to buy more trendy than your neighbour’s pickup truck or your old Toyota (while ignoring the fact that a car has already left most of its carbon footprint by the time it’s rolled off the assembly line and delivered to you)?
So one should just start fighting and hope others follow? Why not just be a hypocrite? We humans are good at it, and that way you can promote the social norm you wish to inspire at a much smaller cost!
If you’re in a population of similar agents, you should expect other people to be doing the same thing, and you’ll be a lot more likely to lose the war than if you actually fight. And if you’re not in a population where you can rely on other people choosing similarly, you want a policy that will effectively force everyone to fight. Any action that “promotes the social norm” but does not really enforce the behavior may be good signaling within the community, but will be useless with respect to not getting killed by barbarians.
Why bother to learn what global warming is if it suffices for you to know it is a buzzword that makes the hybrid car you are going to buy more trendy than your neighbour’s pickup truck or your old Toyota (while ignoring the fact that a car has already left most of its carbon footprint by the time it’s rolled off the assembly line and delivered to you)?
A person who only believes in the signalling value of green technologies (hybrids are trendy) does not want a social policy mandating green behavior (the behaviors would lose their signaling value.)
A person who only believes in the signalling value of green technologies (hybrids are trendy) does not want a social policy mandating green behavior (the behaviors would lose their signaling value.)
A social policy mandating a behavior that is typical of a subgroup shows that that subgroup wields a lot of political power and thus gives it higher status—those pesky blues will have to learn who’s the boss! Hence headscarves forbidden or compulsory, recycling, “in God we trust” as a motto, etc.
If you’re in a population of similar agents, you should expect other people to be doing the same thing, and you’ll be a lot more likely to lose the war than if you actually fight.
I am familiar with the argument; it just happens that I don’t think this is so, at least not when it comes to coordination on global warming. I should have made that explicit though.
Any action that “promotes the social norm” but does not really enforce the behavior may be good signaling within the community, but will be useless with respect to not getting killed by barbarians.
I don’t think you grok how hypocrisy works. By promoting the social norms I don’t follow I make life harder for less skilled hypocrites. The harder life gets for them, the more of them should switch to just following the norms, if that happens to be cheaper.
Sufficiently skilled hypocrites are the enforcers of societal norms.
Also where does this strange idea of a norm not being really enforced come from? Of course it is! The idea that anything worthy of the name social norm isn’t really enforced is an idea that’s currently popular but obviously silly, mostly since it allows us to score status points by pretending to be violating long dead taboos.
The mention of hypocrisy seems to have immediately jumped a few lanes and landed in “doesn’t really enforce”. Ever heard of a double standard? No human society has ever worked without a few. It is perfectly possible to be a lean, mean, norm-enforcing machine and not follow those norms yourself.
A person who only believes in the signalling value of green technologies (hybrids are trendy) does not want a social policy mandating green behavior (the behaviors would lose their signaling value.)
He may not want its universal or near-universal adoption (let’s leave aside whether it’s legislated or not), but it is unavoidable. That’s just how fashion works: absent material constraints, it drifts downwards. And since most past, present, and probably most future societies are not middle-class-dominated societies, one can easily argue that the lower classes embracing ecological conspicuousness might do more good: consuming products based on how they judge it (mass consumption is still what drives the economy) and voting on it (since votes are, to a first approximation, signalling affiliation).
Also, at the end of the day, well-off people still often cooperate on measures such as mandatory school uniforms.
I don’t believe it’s so either. I think that even assuming they believed global warming was a real threat, much or most of the population would not choose to carry their share of the communal burden. This is the sort of situation where you want an enforced policy ensuring cooperation.
In places where rule of law breaks down, a lot of people engage in actions such as looting, but they still generally prefer to live in societies where rules against that sort of thing are enforced.
In places where people are not much like you, where people don’t know you well (or there are other factors making hypocrisy relatively easy to get away with) you shouldn’t bother promoting costly norms by actually following them.
You probably get more expected utility if you are a hypocrite in such circumstances.
That’s true. But it’s still to your advantage to be in a society where rules promoting the norm are enforced. If you’re in a society which doesn’t have that degree of cohesiveness and is too averse to enforcing cooperation, then you don’t want to fight the barbarians; you want to stay at home and get killed later. This is a society you really don’t want to be in though; things have to be pretty hopeless before it’s no longer in your interest to promote a policy of cooperation.
This is a society you really don’t want to be in though; things have to be pretty hopeless before it’s no longer in your interest to promote a policy of cooperation.
This actually makes more sense if you reverse it! Promoting costly norms by following them yourself regardless of the behavior of others only becomes the best policy when the consequences of the norm not being followed are dire!
When I say “things have to be pretty hopeless,” I mean that the prospects for amelioration are low, not that the consequences are dire. Assuming the consequences are severe, taking costly norms on oneself to prevent it makes sense unless the chances of it working are very low.
To avoid slipping into “arguments as soldiers” mode, I just wanted to state that I do think environment-related tragedies of the commons are a big problem for us (come on, the trope namer is basically one!) and we should devote resources to attempting to solve or ameliorate them.
I, on the other hand, find myself among environmentalists thinking that the collective actions they’re promoting mostly have negative individual marginal utility. But I think that acquiring the basic information has positive individual marginal utility (I personally suspect that the most effective solutions to climate change are not ones that hinge on grassroots conservation efforts, but effective solutions will require people to be aware of the problem and take it seriously.)
Wait a sec. Global warming can be important for everyday life without it being important for everyday life that any given individual know about it. In the same way, matters of politics have tremendous bearing on our lives, yet the average person might rationally be ignorant about politics since he can’t have any real effect on politics. I think that’s the spirit in which thomblake means it’s a political matter. For most of us, the earth will get warmer or it won’t, and it doesn’t affect how much we are willing to pay for tomatoes at the grocery store (and therefore it doesn’t change our decision rule for how to buy tomatoes), although it may affect how much tomatoes cost.
(It’s a bit silly, but on the other hand I imagine one could have their preferences for tomatoes depend on whether tomatoes had “genes” or not.)
This is a bit like the distinction between microeconomics and macroeconomics. Macroeconomics is the stuff of front page newspaper articles about the economy, really very important stuff. But if you had to take just one economics class, I would recommend micro, because it gives you a way of thinking about choices in your daily life, as opposed to stuff you can’t have any real effect on.
You don’t have much influence on an election if you vote, but the system stops working if everyone acts only according to the expected value of their individual contribution.
Exactly, it IS the tragedy of the commons, but that supports my point, not yours. It may be good for society if people are more informed about global warming, but society isn’t what makes decisions. Individuals make decisions, and it’s not in the average individual’s interest to expend valuable resources learning more about global warming if it’s going to have no real effect on the quality of their own life.
Whether you think it’s an individual’s “job” or not to do what’s socially optimal is completely beside the point here. The fact is they don’t. I happen to think that’s pretty reasonable, but how we wish people would behave doesn’t matter when we are trying to predict how they actually will behave.
Let me try to be clear, since you might be wondering why someone (not me) downvoted you: You started by noting your shock that people aren’t that informed about global warming. I said we shouldn’t necessarily be surprised that they aren’t that informed about global warming. You responded that we’re suffering from the tragedy of the commons, or the tragedy of the rationalists versus the barbarians. I respond that I agree with what you say but not with what you seem to think it means. When we unearth a tragedy of the commons, we don’t go, “Aha! These people have fallen into a trap and if they saw the light, they would know to avoid it!” Casting light on the tragedy of the commons does not make it optimal for individuals to avoid it.
Casting light on the commons is a way of explaining why people would be behaving in such a socially suboptimal way, not a way of bolstering our shock over their behavior.
In a tragedy of the commons, it’s in everybody’s best interests for everybody to conserve resources. If you’re running TDT in a population with similar agents, you want to conserve, and if you’re in a population of insufficiently similar agents, you want an enforced policy of conservation. The rationalist in a war with the barbarians might not want to fight, but because they don’t want to lose even more, they will fight if they think that enough other people are running a similar decision algorithm, and they will support a social policy that forces them and everyone else to fight. If they think that their side can beat the barbarians with a minimal commitment of their forces, they won’t choose either of these things.
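The payoff structure being argued over here can be sketched with a toy model (all numbers are made up for illustration; nothing in it comes from the thread itself):

```python
# Toy tragedy-of-the-commons model (illustrative numbers only).
# Each of N agents can "conserve" at personal cost c; each act of
# conservation produces benefit b, shared equally by everyone.

N = 100      # population size
c = 1.0      # private cost of conserving
b = 50.0     # total public benefit produced by one conserver

def payoff(i_conserve: bool, others_conserving: int) -> float:
    """Payoff to one agent, given their choice and how many others conserve."""
    total = others_conserving + (1 if i_conserve else 0)
    return total * b / N - (c if i_conserve else 0.0)

# Defecting dominates for a causal decision-maker: whatever others do,
# conserving costs you c = 1.0 but only returns b/N = 0.5 to you personally.
assert payoff(False, 50) > payoff(True, 50)

# Yet universal conservation beats universal defection for everyone:
assert payoff(True, N - 1) > payoff(False, 0)

# TDT-style reasoning changes the answer only if your decision is correlated
# with others'. If choosing to conserve means a fraction p of the population
# makes the same choice along with you, conserving pays once p * b > c:
def correlated_payoff(p: float) -> float:
    """Gain from conserving vs. defecting, if a fraction p chooses with you."""
    return payoff(True, int(p * (N - 1))) - payoff(False, 0)
```

With these numbers the break-even correlation is p = c/b = 0.02: a small correlation makes cooperation worthwhile here only because b is large, which is the crux of the disagreement about whether real populations are "sufficiently similar."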
If you’re running TDT in a population with similar agents, you want to conserve
And this is why xv15 is right and Desrtopa is wrong. Other people do not run TDT or anything similar. Individuals who cooperate with such a population are fools.
TDT is NOT a magic excuse for cooperation. It calls for cooperation in cases when CDT does not only when highly specific criteria are met.
At the Paris meetup Yvain proposed that voting might be rational for TDT-ish reasons, to which I replied that if you have voted for losing candidates at past elections, that means not enough voters are correlated with you. Though now that I think of it, maybe the increased TDT-ish impact of your decision could outweigh the usual arguments against voting, because they weren’t very strong to begin with.
Individuals who cooperate with such a population are fools.
But sometimes it works out anyway. Lots of people can be fools. And lots of people can dislike those who aren’t fools.
People often think “well, if everyone did X, a sufficiently unpleasant thing would happen, therefore I won’t do it”. They also implicitly believe, though they may not state it, “most people are like me in this regard”. They will also say with their facial expressions and actions though not words “people who argue against this are mean and selfish”.
In other words I just described a high trust society. I’m actually pretty sure if you live in Switzerland you could successfully cooperate with the Swiss on global warming for example. Too bad global warming isn’t just a Swiss problem.
And lots of people can dislike those who aren’t fools.
Compliance with norms so as to avoid punishment is a whole different issue. And obviously if you willfully defy the will of the tribe when you know that the punishment exceeds the benefit to yourself then you are the fool and the compliant guy is not.
They will also say with their facial expressions and actions though not words “people who argue against this are mean and selfish”.
Of course they will. That’s why we invented lying! I’m in agreement with all you’ve been saying about hypocrisy in the surrounding context.
I agree. Desrtopa is taking Eliezer’s barbarians post too far for a number of reasons.
1) Eliezer’s decision theory is at the least controversial which means many people here may not agree with it.
2) Even if they agree with it, it doesn’t mean they have attained rationality in Eliezer’s sense.
3) Even if they have attained this sort of rationality, we are but a small community, and the rest of the world is still not going to cooperate with us. Our attempts to cooperate with them will be impotent.
Desrtopa: Just because it upholds an ideal of rationality that supports cooperation, does not mean we have attained that ideal. Again, the question is not what you’d like to be true, but about what’s actually true. If you’re still shocked by people’s low confidence in global warming, it’s time to consider the possibility that your model of the world—one in which people are running around executing TDT—is wrong.
Desrtopa is taking Eliezer’s barbarians post too far for a number of reasons.
Those are all good reasons but as far as I can tell Desrtopa would probably give the right answer if questioned about any of those. He seems to be aware of how people actually behave (not remotely TDTish) but this gets overridden by a flashing neon light saying “Rah Cooperation!”.
There are plenty of ways in which I personally avoid cooperation for my own benefit. But in general I think that a personal policy of not informing oneself at even a basic level about tragedies of commons where the information is readily available is not beneficial, because humans have a sufficiently developed propensity for resolving tragedies of commons to give at least the most basic information marginal benefit.
To me, this comment basically concedes that you’re wrong but attempts to disguise it in a face-saving way. If you could have said that people should be informing themselves at the socially optimal level, as you’ve been implying with your TDT arguments above, you would have. Instead, you backed off and said that people ought to be informing themselves at least a little.
Just to be sure, let me rewrite your claim precisely, in the sense you must mean it given your supposed continued disagreement:
In general I think that a personal policy of not informing oneself at even a basic level about tragedies of commons where the information is readily available is not beneficial to the individual, because humans have a sufficiently developed propensity for resolving tragedies of commons to give at least the most basic information marginal benefit to the individual.
Assuming that’s what you’re saying, it’s easy to see that even this is an overreach. The question on the table is whether people should be informing themselves about global warming. Whether the first epsilon of information one gets from “informing oneself” (as opposed to hearing the background noise) is beneficial to the individual relative to the cost of attaining it, is a question of derivatives of cost and benefit functions at zero, and it could go either way. You simply can’t make a general statement about how these derivatives relate for the class of Commons Problems. But more importantly, even if you could, SO WHAT? The question is not whether people should be informing themselves a bit, the question is whether they should be informing themselves at anywhere close to the socially optimal level. And by admitting it’s a tragedy of the commons, we are already ANSWERING that question.
Does that make sense? Am I misunderstanding your position? Has your position changed?
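The point xv15 makes about derivatives can be written out explicitly (the notation is mine, not from the thread):

```latex
% Let B(e) be the PRIVATE benefit and C(e) the private cost to one
% individual of investing effort e in informing themselves.
% "Is the first epsilon of information worth it?" is a question about
% derivatives at zero, and can go either way:
\[
  B'(0) \gtrless C'(0).
\]
% The socially optimal effort level e^* instead equates the SUM of
% everyone's marginal benefits with the marginal cost:
\[
  \sum_i B_i'(e^*) = C'(e^*),
\]
% so even when B'(0) > C'(0), the privately optimal effort stops far
% short of e^*, which is exactly what "tragedy of the commons" means.
```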
To me, this comment basically concedes that you’re wrong but attempts to disguise it in a face-saving way.
It seems that you are trying to score points for winning the debate. If your interlocutor indeed concedes something in a face-saving way, forcing him to admit it is useless from the truth-seeking point of view.
prase, I really sympathize with that comment. I will be the first to admit that forcing people to concede their incorrectness is typically not the best way of getting them to agree on the truth. See for example this comment.
BUT! On this site we sort of have TWO goals when we argue, truth-seeking and meta-truth-seeking. Yes, we are trying to get closer to the truth on particular topics. But we’re also trying to make ourselves better at arguing and reasoning in general. We are trying to step back and notice what we’re doing, and correct flaws when they are exposed to our scrutiny.
If you look back over this debate, you will see me at several points deliberately stepping back and trying to be extremely clear about what I think is transpiring in the debate itself. I think that’s worth doing, on lesswrong.
To defend the particular sentence you quote: I know that when I was younger, it was entirely possible for me to “escape” from a debate in a face-saving way without realizing I had actually been wrong. I’m sure this still happens from time to time...and I want to know if it’s happening! I hope that LWers will point it out. On LW I think we ought to prioritize killing biases over saving faces.
I know that when I was younger, it was entirely possible for me to “escape” from a debate in a face-saving way without realizing I had actually been wrong. I’m sure this still happens from time to time...and I want to know if it’s happening! I hope that LWers will point it out.
The key question is: would you believe it if it were your opponent in a heated debate who told you?
I’d like to say yes, but I don’t really know. Am I way off-base here?
Probably the most realistic answer is that I would sometimes believe it, and sometimes not. If not often enough, it’s not worth it. It’s too bad there aren’t more people weighing in on these comments because I’d like to know how the community thinks my priorities should be set. In any case you’ve been around for longer so you probably know better than I.
Alice: “But Z is irrelevant with respect to X’, which is what I actually mean.”
Now, Bob agrees with X’. What will Bob say?
1. “Fine, we agree after all.”
2. “Yes, but remember that X is problematic and not entirely equivalent to X’.”
3. “You should openly admit that you were wrong with X.”
If I were in place of Alice, (1) would cause me to abandon X and believe X’ instead. For some time I would deny that they aren’t equivalent, or think that my saying X was only poor formulation on my part and that I have always believed X’. Later, when I cease identifying with my past self too much, I would admit (at least to myself) that I have changed my opinion. (2) would have similar effects, with more resentment directed at Bob. In case of (3) I would perhaps try to continue debating to win the lost points back by pointing out weak points of Bob’s opinions or debating style, and after calming down I would believe that Bob is a jerk and search hard for reasons why Z is a bad argument. Eventually I would (hopefully) move to X’ too (I don’t like to believe things which are easily attacked), but it would take longer. I would certainly not admit my error on the spot.
(The above is based on memories of my reactions in several past debates, especially before I read about cognitive biases and such.)
Now, to tell how generalisable are our personal anecdotes, we should organise an experiment. Do you have any idea how to do it easily?
Later, when I cease identifying with my past self too much, I would admit (at least to myself) that I have changed my opinion.
I think the default is that people change specific opinions more in response to the tactful debate style you’re identifying, but are less likely to ever notice that they have in fact changed their opinion. I think explicitly noticing one’s wrongness on specific issues can be really beneficial in making a person less convinced of their rightness more globally, and therefore more willing to change their mind in general. My question is how we ought to balance these twin goals.
It would be much easier to get at the first effect by experiment than the second, since the latter is a much more long-term investment in noticing one’s biases more generally. And if we could get at both, we would still have to decide how much we care about one versus the other, on LW.
Personally I am becoming inclined to give up the second goal.
Since here on LW changing one’s opinion is considered a supreme virtue, I would even suspect that long-term users confabulate having changed their opinion when actually they didn’t. Anyway, a technique that might be useful is keeping detailed diaries of what one thinks and reviewing them after a few years (or, for that matter, looking at what one has written on the internet a few years ago). The downside is, of course, that writing beliefs down may make their holders even more entrenched.
The downside is, of course, that writing beliefs down may make their holders even more entrenched.
Entirely plausible—cognitive dissonance, public commitment, backfire effect, etc. Do you think this possibility negates the value, or are there effective counter-measures?
“it’s best if we notice and acknowledge when we’re wrong, and therefore I will do my best to notice and acknowledge when I’m wrong”
“it’s best if we notice and acknowledge when we’re wrong, and therefore I will upvote, praise, and otherwise reinforce such acknowledgements when I notice them” and
“it’s best if we notice and acknowledge when we’re wrong, and therefore I will downvote, criticize, and otherwise punish failure to do so.”
True in the immediate sense, but I disagree in the global sense that we should encourage face-saving on LW, since doing so will IMO penalize truth-seeking in general. Scoring points for winning the debate is a valid and important mechanism for reinforcing behaviors that lead to debate-winning, and should be allowed in situations where debate-winning correlates to truth-establishment in general, not just for the arguing parties.
This is also true in the immediate sense, but somehow implies that the debate-winning behaviours are a net positive with respect to truth seeking at least in some possible (non-negligibly frequent) circumstances. I find the claim dubious. Can you specify in what circumstances is the debate winning argumentation style superior to leaving a line of retreat?
Line of retreat is superior for convincing your debate partner, but debate-winning behavior may be superior for convincing uninvolved readers, because it encourages verbal admission of fault which makes it easier to discern the prevailing truth as a reader.
debate-winning behavior may be superior for convincing uninvolved readers, because it encourages verbal admission of fault
That isn’t actually the reason. The reason debate-winning behavior is superior for convincing bystanders is that it appeals to their natural desire to side with the status-gaining triumphant party. As such, it is a species of Dark Art.
This is what I am not sure about. I know that I will be more likely to admit being wrong when I have a chance to do it in a face-saving way (this includes simply saying “you are right” when I am doing it voluntarily and the opponent has debated in a civilised way up to that point) than when my interlocutor tries to force me to do that. I know it, but still can’t easily get rid of that bias.
There are several outcomes of a debate where one party is right and the other is wrong:
1. The wrong side admit their wrongness.
2. The wrong side don’t want to admit their wrongness but realise that they have no good arguments and drop out of the debate.
3. The wrong side don’t want to admit their wrongness and still continue debating in hope of defeating the opponent or at least achieving an honourable draw.
4. The wrong side don’t even realise their wrongness.
The exact flavour of debate-winning behaviour I have criticised makes 2 difficult or impossible, consequently increasing probabilities of 1, 3 or 4. 1 is superior to 2 from almost any point of view, but 2 is similarly superior to 3 and 4 and it is far from clear whether the probability of 1 increases more than probabilities of 3 and 4 combined when 2 ceases to be an option, or whether it increases at all.
Or where both sides admit their wrongness and switch their opinions, or where a third side intervenes and bans them both for trolling. Next time I’ll try to compose a more exhaustive list.
Don’t forget the case where the two parties are talking at cross purposes (e.g. Alice means that a tree falling in a forest with no-one around generates no auditory sensations and Bob means that it does generate acoustic waves) but neither of them realizes that; it doesn’t even occur to each that the other might be meaning something else by sound. (I’m under the impression that this is relatively rare on LW, but it does constitute a sizeable fraction of all arguments I hear elsewhere, both online and in person.)
Yes, you are misunderstanding my position. I don’t think that it’s optimal for most individuals to inform themselves about global warming to a “socially optimal” level where everyone takes the issue sufficiently seriously to take grassroots action to resolve it. Human decisionmaking is only isomorphic to TDT in a limited domain and you can only expect so much association between your decisions and others; if you go that far, you’re putting in too much buck for not enough bang, unless you’re getting utility from the information in other ways. But at the point where you don’t have even basic knowledge of global warming, anticipating a negative marginal utility on informing yourself corresponds to a general policy of ignorance that will serve one poorly with respect to a large class of problems.
If there were no correlation between one person’s decisions and another’s, it would probably not be worth anyone’s time to learn about any sort of societal problems at all, but then, we wouldn’t have gotten to the point of being able to have societal problems in the first place.
Unfortunately that response did not convince me that I’m misunderstanding your position.
If people are not using a TDT decision rule, then your original explicit use of TDT reasoning was irrelevant and I don’t know why you would have invoked it at all unless you thought it was actually relevant. And you continue to imply at least a weaker form of that reasoning.
No one is disputing that there is correlation between people’s decisions. The problem is that correlation does not imply that TDT reasoning works! A little bit of correlation does not imply that TDT works a little bit. Unless people are similar to you AND using TDT, you don’t get to magically drag them along with you by choosing to cooperate.
This is a standard textbook tragedy of the commons problem, plain and simple. From where I’m standing I don’t see the relevance of anything else. If you want to continue disagreeing, can you directly tell me whether you think TDT is still relevant and why?
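The gap being pointed at here can be made concrete with a toy calculation (all payoffs and the correlation parameter are illustrative assumptions, not anything from the thread): even granting some probability p that the other player's choice mirrors yours, cooperating only wins once p is quite high. A little correlation really does not make cooperation rational "a little bit."

```python
# One-shot Prisoner's Dilemma, payoffs to me: T > R > P > S.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

def ev_cooperate(p):
    # With probability p the other player's choice mirrors mine.
    return p * R + (1 - p) * S

def ev_defect(p):
    return p * P + (1 - p) * T

# Cooperating only pays above a threshold:
# here p > (T - S) / (T - S + R - P) = 5/7, roughly 0.71.
for p in (0.0, 0.5, 0.72, 1.0):
    better = "cooperate" if ev_cooperate(p) > ev_defect(p) else "defect"
    print(f"p = {p}: {better}")
```

With these (assumed) payoffs, correlation below about 0.71 leaves defection the better choice, which is the sense in which modest similarity between reasoners buys nothing.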
People don’t use a generalized form of TDT, but human decisionmaking is isomorphic to TDT in some domains. Other people don’t have to consciously be using TDT to sometimes make decisions based on a judgment of how likely it is that other people will behave similarly.
Tragedies of commons are not universally unresolvable. It’s to everyone’s advantage for everyone to pool their resources for some projects for the public good, but it’s also advantageous for each individual to opt out of contributing their resources. But under the institution of governments, we have sufficient incentives to prevent most people from opting out. Simply saying “It’s a tragedy of the commons problem” doesn’t mean there’s no chance of resolving it and therefore no use in knowing about it.
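The commons structure under discussion can be sketched with a toy model (the numbers are illustrative assumptions): overgrazing is the dominant choice for each individual shepherd, yet universal restraint beats universal overgrazing, and that gap is exactly what an enforced policy closes.

```python
# Toy tragedy of the commons: N shepherds each choose to restrain or
# overgraze. Each overgrazer gains privately but degrades the shared
# field for everyone.
N = 10
PRIVATE_GAIN = 3   # extra payoff an individual gets by overgrazing
SHARED_LOSS = 1    # damage each overgrazer imposes on *every* shepherd

def payoff(i_overgraze, others_overgrazing):
    total_overgrazers = others_overgrazing + (1 if i_overgraze else 0)
    base = 10
    return base + (PRIVATE_GAIN if i_overgraze else 0) - SHARED_LOSS * total_overgrazers

# Whatever the others do, overgrazing is individually better...
for others in range(N):
    assert payoff(True, others) > payoff(False, others)

# ...but everyone restraining beats everyone overgrazing.
print(payoff(False, 0), payoff(True, N - 1))  # 10 vs 3
```

The individual incentive never flips on its own; it takes an external mechanism (a government, a punishment norm) to change the payoffs.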
Well, take Stop Voting For Nincompoops, for example. If you were to just spontaneously decide “I’m going to vote for the candidate I really think best represents my principles in hope that that has a positive effect on the electoral process,” you have no business being surprised if barely anyone thinks the same thing and the gesture amounts to nothing. But if you read an essay encouraging you to do so, posted in a place where many people apply reasoning processes similar to your own, the choice you make is a lot more likely to reflect the choice a lot of other people are making.
It seems like this is an example of, at best, a domain on which decisionmaking could use TDT. No one is denying that people could use TDT, though. I was hoping for you to demonstrate an example where people actually seem to be behaving in accordance with TDT. (It is not enough to just argue that people reason fairly similarly in certain domains).
“Isomorphic” is a strong word. Let me know if you have a better example.
Anyway let me go back to this from your previous comment:
Tragedies of commons are not universally unresolvable.... Simply saying “It’s a tragedy of the commons problem” doesn’t mean there’s no chance of resolving it and therefore no use in knowing about it.
No one is claiming tragedies of the commons are always unresolvable. We are claiming that unresolved tragedies of the commons are tragedies of the commons! You seem to be suggesting that knowledge is a special thing which enables us to possibly resolve tragedies of the commons and therefore we should seek it out. But in the context of global warming and the current discussion, knowledge-collection is the tragedy of the commons. To the extent that people are underincentivized to seek out knowledge, that is the commons problem we’re talking about.
If you turn around and say, “well they should be seeking out more knowledge because it could potentially resolve the tragedy”...well of course more knowledge could resolve the tragedy of not having enough knowledge, but you have conjured up your “should” from nowhere! The tragedy we’re discussing is what exists after rational individuals decide to gather exactly as much information as a rational agent “should,” where should is defined with respect to that agent’s preferences and the incentives he faces.
Final question: If TDT reasoning did magically get us to the level of informedness on global warming that you think we rationally should be attaining, and if we are not attaining that level of informedness, does that not imply that we aren’t using TDT reasoning? And if other people aren’t using TDT reasoning, does that not imply that it is NOT a good idea for me to start using it? You seem to think that TDT has something to do with how rational agents “should” behave here, but I just don’t see how TDT is relevant.
By “TDT reasoning”—I know, I know—I have been meaning Desrtopa’s use of “TDT reasoning,” which seems to be like TDT + [assumption that everyone else is using TDT].
I shouldn’t say that TDT is irrelevant, but really that it is a needless generalization in this context. I meant that Desrtopa’s invocation of TDT was irrelevant, in that it did nothing to fix the commons problem that we were initially discussing without mention of TDT.
You seem to be suggesting that knowledge is a special thing which enables us to possibly resolve tragedies of the commons and therefore we should seek it out. But in the context of global warming and the current discussion, knowledge-collection is the tragedy of the commons. To the extent that people are underincentivized to seek out knowledge, that is the commons problem we’re talking about.
Lack of knowledge of global warming isn’t the tragedy of the commons I’m talking about; even if everyone were informed about global warming, it doesn’t necessarily mean we’d resolve it. Humans can suffer from global climate change despite the entire population being informed about it, and we might find a way to resolve it that works despite most of the population being ignorant.
The question a person starting from a position of ignorance about climate change has to answer is “should I expect that learning about this issue has benefits to me in excess of the effort I’ll have to put in to learn about it?” An answer of “no” corresponds to a low general expectation of information value considering the high availability of the information.
The reason I brought up TDT was as an example of reasoning that relies on a correlation between one agent’s choices and another’s. I didn’t claim at any point that people are actually using TDT. However, if decision theory that assumes correlation between people’s decisions did not outcompete decision theory which does not assume any correlation, we wouldn’t have evolved cooperative tendencies in the first place.
If you were to just spontaneously decide “I’m going to vote for the candidate I really think best represents my principles in hope that that has a positive effect on the electoral process,” you have no business being surprised if barely anyone thinks the same thing and the gesture amounts to nothing.
Determining that the gesture amounts to less than the gesture of going in to the poll booth and voting for one of the two party lizards seems rather difficult.
Of course, it’s in practice nearly impossible for me to determine through introspection whether what feels like a “spontaneous” decision on my part is in fact being inspired by some set of external stimuli, and if so which stimuli. And without that data, it’s hard to predict the likelihood of other people being similarly inspired.
So I have no business being too surprised if lots of people do think the same thing, either, even if I can’t point to an inspirational essay in a community of similar reasoners as a mechanism.
In other words, sometimes collective shifts in attitude take hold in ways that feel entirely spontaneous (and sometimes inexplicably so) to the participants.
Those are all good reasons but as far as I can tell Desrtopa would probably give the right answer if questioned about any of those. He seems to be aware of how people actually behave (not remotely TDTish) but this gets overridden by a flashing neon light saying “Rah Cooperation!”.
He may be mistaken about how high trust the society he lives in is. This is something it is actually surprisingly easy to be wrong about, since our intuitions aren’t built for a society of hundreds of millions living across an entire continent; our minds don’t understand that our friends, family and co-workers are not a representative sample of the actual “tribe” we are living in.
He may be mistaken about how high trust the society he lives in is.
Even if that is the case he is still mistaken about game theory. While the ‘high trust society’ you describe would encourage cooperation to the extent that hypocrisy does not serve as a substitute, the justifications Desrtopa gives are in terms of game theory and TDT. It relies on acting as if other agents are TDT agents when they are not—an entirely different issue from dealing with punishment norms by ‘high trust’ agents.
We are in agreement on that. But this might better explain why, on second thought, I think it doesn’t matter, at least not in this sense, on the issue of whether educating people about global warming matters.
I think we may have been arguing against a less than maximally charitable interpretation of his argument, which I think isn’t that topical a discussion (even if it serves to clear up a few misconceptions). Even if the less charitable interpretation is the one he now endorses or actually did intend, that doesn’t seem that relevant to me.
“rah cooperation” I think in practice translates into “I think I live in a high trust enough society that it’s useful to use this signal to get people to ameliorate this tragedy of the commons situation I’m concerned about.”
In which case you want an enforced policy conforming to the norm. A rational shepherd in a communal grazing field may not believe that if he doesn’t let his flock overgraze, other shepherds won’t either, but he’ll want a policy punishing or otherwise preventing overgrazers.
In which case you want an enforced policy conforming to the norm.
Yes, and this means that individuals with the ability to influence or enforce policy about global warming can potentially benefit somewhat from knowing about global warming. For the rest of the people (nearly everyone) knowledge about global warming is of no practical benefit.
If the public doesn’t believe in or care about climate change, then public officials who care about climate change won’t get into power in the first place.
In a society that doesn’t believe the barbarians pose a credible threat, they won’t support a leader who wants to make everyone cooperate in the war.
Again you have jumped back to what benefits society—nobody questions the idea that it is bad for the society if the population doesn’t know Commons_Threat_X. Nobody questions the idea that it is bad for an individual if everyone else doesn’t know about Commons_Threat_X. What you seem unwilling to concede is that there is negligible benefit to that same individual for him, personally learning about Commons_Threat_X (in a population that isn’t remotely made up of TDT agents).
In a population of sufficiently different agents, yes. If you’re in a population where you can rationally say “I don’t know whether this is a credible threat or not, but no matter how credible a threat it is, my learning about it and taking it seriously will not be associated with other people learning about it or taking it seriously,” then there’s no benefit in learning about Commons Threat X. If you assume no association of your action with other people’s, there’s no point in learning about any commons threat ever. But people’s actions are sufficiently associated to resolve some tragedies of commons, so in general when people are facing commons problems, it will tend to be to their benefit to make themselves aware of them when the information is made readily available.
Any agent encountering a scenario that can be legitimately described as commons problem with a large population of humans will either defect or be irrational. It really is that simple. Cooperate-Bots are losers.
(Note that agents with actually altruistic preferences are a whole different question.)
It really is that simple. Cooperate-Bots are losers.
Yes. Unless being cooperative makes more Cooperative bots (and not more defecting bots) than defecting makes rational bots (and not Cooperative bots), or used to do so and the vast majority of the population are still cooperative bots.
Evolution has in some specific circumstances made humans cooperate and be collectively better off in situations where rational agents with human values wouldn’t have. That’s the beauty of us being adaptation-executers, not fitness-maximizers.
A rational agent among humans could easily spend his time educating them about global warming, if the returns are high enough (I’m not talking about book revenues or payment for appearances or some irrational philanthropist paying him to do so, I’m actually talking about the returns of ameliorating the negative effects of global warming) and the costs low enough. That’s the interesting version of the debate about it being more “important” people know about global warming than tomatoes having genes.
A rational agent among irrational agents can actually be better off helping them cooperate and coordinate to avoid a specific situation in certain conditions rather than just plain old defecting.
I would add that adjective to agree with this sentence. Humans are agents, but they aren’t rational.
If you reread the sentence you may note that I was careful to make that adjective redundant—sufficiently redundant as to border on absurd. “A rational agent will X or be irrational” is just silly. “A rational agent will X” would have been true but misses the point when talking about humans. That’s why I chose to write “An agent will X or be irrational”.
Yes. Unless being cooperative makes more Cooperative bots (and not more defecting bots) than defecting makes rational bots (and not Cooperative bots)
No. Cooperating is different to being a Cooperate-Bot. A rational agent will cooperate when it will create a better outcome via, for example, making other people cooperate. A Cooperate-Bot will cooperate even when it creates bad outcomes and completely independently of the responses of other agents or their environment. The only situations where it can be expected for it to be better to be a Cooperate-Bot than a rational agent that chooses to cooperate are those contrived scenarios where an entity or the environment is specifically constructed to read the mind and motives of the agent and punish it for cooperating for rational reasons.
I don’t understand why you have gone through my various comments here to argue with trivially true statements. I was under the impression that I mostly joined the conversation agreeing with you.
A rational agent among irrational agents can actually be better off helping them cooperate and coordinate to avoid a specific situation in certain conditions rather than just plain old defecting.
Yes. When an agent can influence the behavior of other agents and cooperating in order to do so is of sufficient benefit it will cooperate in order to influence others. If this wasn’t the case we wouldn’t bother considering most of the game theoretic scenarios that we construct.
A Cooperate-Bot will cooperate even when it creates bad outcomes and completely independently of the responses of other agents or their environment.
That doesn’t mean they can’t win, as in being the only bots left standing. It is trivially easy to construct such situations. Obviously this won’t help the individuals.
I don’t understand why you have gone through my various comments here to argue with trivially true statements. I was under the impression that I mostly joined the conversation agreeing with you.
I wasn’t arguing with the statements. I think I even generally affirmed your comments at the start of my comments to avoid confusion. I was just emphasising that while this is settled, the best version of the argument about the utility of trying to educate other people on global warming probably isn’t.
Also two comments don’t really seem like “going through several of your comments” in my eyes!
If you reread the sentence you may note that I was careful to make that adjective redundant—sufficiently redundant as to border on absurd. “A rational agent will X or be irrational” is just silly. “A rational agent will X” would have been true but misses the point when talking about humans. That’s why I chose to write “An agent will X or be irrational”.
Indeed, I obviously didn’t register the sentence properly, edited.
Desrtopa, can we be careful about what it means to be “different” from other agents? Without being careful, we might reach for any old intuitive metric. But it’s not enough to be mentally similar to other agents across just any metric. For your reasoning to work, they have to be executing the same decision rule. That’s the metric that matters here.
Suppose we start out identical but NOT reasoning as per TDT—we defect in the prisoner’s dilemma, say—but then you read some LW and modify your decision rule so that when deciding what to do, you imagine that you’re deciding for both of us, since we’re so similar after all. Well, that’s not going to work too well, is it? My behavior isn’t going to change any, since, after all, you can’t actually influence it by your own reading about TDT.
So don’t be so quick to cast your faith in TDT reasoning. Everyone can look very similar in every respect EXCEPT the one that matters, namely whether they are using TDT reasoning.
With this in mind, if you reread the bayesians versus barbarians post you linked to, you should be able to see that it has the feel more of an existence proof of a cooperate-cooperate equilibrium. It does not say that we will necessarily find ourselves in such an equilibrium just by virtue of being sufficiently similar.
Obviously the relevant difference is in their decision metrics. But human decision algorithms, sloppy and inconsistent though they are, are in some significant cases isomorphic to TDT.
If we were both defecting in the Prisoner’s dilemma, and then I read some of the sequences and thought that we were both similar decisionmakers and stopped defecting, it would be transparently stupid if you hadn’t also been exposed to the same information that led me to make the decision in the first place. If I knew you had also read it, I would want to calculate the expected value of defecting or cooperating given the relative utilities of the possible outcomes and the likelihood that your decision would correspond to my own.
I think you’re assuming much sloppier reasoning on my part than is actually the case (of course I probably have a bias in favor of thinking I’m not engaging in sloppy reasoning, but your comment isn’t addressing my actual position.) Do I think that if I engage in conservation efforts, this will be associated with a significant increase in likelihood that we won’t experience catastrophic climate change? Absolutely not. Those conservation efforts I engage in are almost entirely for the purpose of signalling credibility to other environmentalists (I say “other” but it’s difficult to find anyone who identifies as an environmentalist who shares my outlook,) and I am completely aware of this.

However, the utility cost of informing oneself about a potential tragedy of commons, at least to a basic level, is extremely low when the information is readily available and heavily promoted. Humans have a record of resolving some types of tragedies of commons (although certainly not all,) and the more people who’re aware of and care about the issue, the greater the chance of the population resolving it (they practically never will of their own volition, but they will be more likely to support leaders who take it seriously, not defect from policies that address it, and so on.) The expected utility has to be very low to not overcome the minimal cost of informing themselves.

And of course, you have to include the signalling value of being informed (when you’re informed you can still signal that you don’t think the issue is a good time investment, or significant at all, but you can also signal your familiarity.)
I think that those environmentalists who expect that by public campaigning and grassroots action we can reduce climate change to manageable levels are being unreasonably naive; they’re trying to get the best results they can with their pocket change when what they need is three times their life savings. Sufficient levels of cooperation could resolve the matter simply enough, but people simply don’t work like that. To completely overextend the warfare metaphor, I think that if we’ve got a shot of not facing catastrophe, it’s not going to look like everyone pulling together and giving their best efforts to fight the barbarians, it’s going to look more like someone coming forward and saying “If you put me in charge I’m going to collect some resources from all of you, and we’re going to use them to make a nuclear bomb according to this plan these guys worked out.” Whether the society succeeds or not will hinge on proliferation of information and how seriously the public takes the issue, whether they mostly say things like “sounds like a good plan, I’m in,” and “if it’s really our best chance,” rather than “I care more about the issues on Leader B’s platform” and “I have no idea what that even means.”
I would say that it’s considerably more important for everyday life for most people than knowing whether tomatoes have genes.
What I think you should be arguing here (and what on one level I think you were implicitly arguing) is that in a sufficiently high trust society one should spend more resources on educating people about global warming than tomatoes having genes if one wants to help them.
It is for their own good, but not their personal good. Like a vaccine shot that has a high rate of nasty side effects but helps keep an infectious disease at bay. If you care about them, it can be rational to take the shot yourself if that’s an effective signal to them that you aren’t trying to fool them. By default they will be modelling you as one of them and interpreting your actions accordingly. Likewise, if you happen to be good enough at deceit that they will fail to detect it, you can still use that signal to help them, even if you take a fake shot.
Humans are often predictably irrational. The arational processes that maintain the high trust equilibrium can be used to let you take withdrawals of cooperative behaviour from the bank when the rational incentives just aren’t there. What game theory is good for in this case is realizing how much you are withdrawing, since a rational, game-theory-savvy agent is a pretty good benchmark for some cost analysis. You naturally need to think about the cost to quickly gauge whether the level of trust in a society is high enough, and furthermore, if you burden it in this way, whether the equilibrium is still stable in the mid-term.
Also, it’s already having a fairly substantial effect on polar communities in the US, Canada and Russia, making it difficult to obtain enough food. Many of them are impoverished in the context of the national economy and still whaling-dependent in large part for enough food to survive. Any disruption is a direct threat to food availability.
I’m not sure how that’s a response to what I said. Electing a president who opts to start a nuclear war would obviously be a political issue, and might have even worse effects on humans.
You said it’s not an important issue for everyday life.
Things that significantly impact health (how often are you exposed to pathogens and how severe are they?), weather (makes a big difference even for an urban-living person with access to climate-controlled dwelling like me in the Midwest), the availability of food and water (which you need for not dying), and the stability of where you live (loss of which compromises all the others and requires you to try to find somewhere else and see what happens there) seem like the very definition of important to everyday life.
What I meant was that knowing stuff about the issue isn’t important for everyday life. While the availability of food and water is good to know about, what environmental conditions caused it is less important unless I’m a farmer or policy-maker.
Similarly, a nuclear war would impact health, weather, and the availability of food and water, but I am much better off worrying about whether my car needs an oil change than worrying about whether my government is going to start a nuclear war.
I can sort of agree, insofar as I can’t myself direct the government to never under any circumstances actually do that, and I can’t sequester an Industrial Revolution’s worth of CO2 just by being aware of the problem, but I feel like it misses something. Not everyone is going to be equally unable to meaningfully contribute to solving the problem—if a high baseline level of meaningful awareness of an issue is the norm, it seems like society is more likely to get the benefits of “herd immunity” to that failure mode. It’s not guaranteed, and I wouldn’t call it a sufficient condition by any means for solving the problem, but it increases the odds that any given person with great potential influence over the activities of society will be prepared to respond to it productively.
I suppose if you think we’ll get FAI soon, this is irrelevant—it’s a whole lot less efficient and theoretically stable a state than some superintelligence just solving the problem in a way that makes it a nonissue and doesn’t rely on corruptible, fickle, perversely-incentivized human social patterns. I’m not so sanguine about that possibility m’self, although I’d love to be wrong.
EDIT: I guess what I’m saying is, why would you NOT want information about something that might be averted or mitigated, but whose likely consequences are a severe change to your current quality of life?
EDIT: I guess what I’m saying is, why would you NOT want information about something that might be averted or mitigated, but whose likely consequences are a severe change to your current quality of life?
I want all information about all things. But I don’t have time for that. And given the option of learning to spot global warming or learning to spot an unsafe tire on my car, I’ll have to pick #2. WLOG.
Even if it turns out that you can leverage the ability to spot global warming into enough money to pay your next-door neighbor to look at your car tires every morning and let you know if they’re unsafe?
This implies the Maldiveans should not concern themselves with the fact that their entire country (most of it no more than a meter or two above sea level) is projected to lose most of the land sustaining its tourist industry (the main engine of its economy), displacing most of its population and leading to greater social volatility in a country enjoying a relative respite after many years of internecine violence.
I want all information about all things. But I don’t have time for that.
If you think this applies to the question of whether or not it’s valuable for anyone to know about global warming in concrete terms and the plausible implications for their own lives, then I can only say that I hope for your sake you live somewhere nicely insulated from such possible changes. Me, I’d rather treat it the way I treat earthquakes where I’m from or tornadoes where I live now: things worth knowing about and being at least somewhat prepared for if it’s at all possible to do so.
I would be unsurprised by zero familiarity in a random sampling of the population, but I would have expected a greater degree of familiarity here as a matter of general scientific literacy.
Stop being astonished so easily. How much familiarity with climate science do you expect the average non-climate scientist to actually have?
I suspect that people displaying >95% certainty about AGW aren’t much more “familiar with the data” than the people who display less certainty—that their most significant difference is that they put more trust on what is a political position in the USA.
But I doubt you question the “familiarity with the data” of the people who are very very certain of your preferred position.
The average LessWronger is almost certainly much more competent to evaluate that global temperatures have been rising significantly, and that at least one human behavior has had a nontrivial effect on this change in temperature, than to evaluate that all life on earth shares a common ancestral gene pool, or that some 13.75 billion years ago the universe began rapidly inflating. Yet I suspect that the modern evolutionary synthesis (including its common-descent thesis), and the Big Bang Theory, are believed more strongly by LessWrongers than is anthropogenic climate change.
If so, then it can’t purely be a matter of LessWrongers’ lack of expertise in climate science; there must be some sociological factors undermining LessWrongers’ confidence in some scientific claims they have to largely take scientists’ word for, while not undermining LessWrongers’ confidence in all scientific claims they have to largely take scientists’ word for.
Plausibly, the ongoing large-scale scientific misinformation campaign by established economic and political interests is having a big impact. Merely hearing about disagreement, even if you have an excellent human-affairs model predicting such disagreement in the absence of any legitimate scientific controversy, will for psychological reasons inevitably shake a generic onlooker’s confidence. Listen to convinced and articulate flat-earth skeptics long enough, and some measure of psychological doubt is inevitable, even if you are savvy enough to avoid letting this doubt creep into your more careful and reflective probability calculations.
The average LessWronger is almost certainly much more competent to evaluate [anthropogenic global warming] than [universal common descent or Big Bang cosmology]
I agree that they are likely at least somewhat more competent about the former than the latter, but why do you think they are almost certainly much more competent?
Evaluating common descent requires evaluating the morphology, genome, and reproductive behavior of every extremely distinctive group of species, or of a great many. You don’t need to look at each individual species, but you at least need to rule out convergent evolution and (late) lateral gene transfer as adequate explanations of homology. (And, OK, aliens.) How many LessWrongers have put in that legwork?
Evaluating the age of the universe requires at least a healthy understanding of contemporary physics in general, and of cosmology. The difficulty isn’t just understanding why people think the universe is that old, but having a general enough understanding to independently conclude that alternative models are not correct.
That’s a very basic sketch of why I’d be surprised if LessWrongers could better justify those two claims than the mere claim that global temperatures have been rising (which has been in the news a fair amount, and can be confirmed in a few seconds on the Internet) and a decent assessment of the plausibility of carbon emissions as a physical mechanism. Some scientific knowledge will be required, but not of the holistic ‘almost all of biology’ or ‘almost all of physics’ sort indicated above, I believe.
I think you’re seriously failing to apply the Principle of Charity here. Do you think I assume that anyone who claims to “believe in the theory of evolution” understands it well?
RobbBB has already summed up why the levels of certainty shown in this survey would be anomalous when looked at purely from an “awareness of information” perspective, which is why I think that it would be pretty astonishing if lack of information were actually responsible.
AGW is a highly politicized issue, but then, so is evolution, and the controversy on evolution isn’t reflected among the membership of Less Wrong, because people aligned with the bundle of political beliefs which are opposed to it are barely represented here. I would not have predicted in advance that such a level of controversy on AGW would be reflected among the population of Less Wrong.
while anthropogenic global warming doesn’t yet have the same sort of degree of evidence as, say, evolution, I think that an assignment of about 70% probability represents either critical underconfidence or astonishingly low levels of familiarity with the data.
ArisKatsaris said:
I suspect that people displaying >95% certainty about AGW aren’t much more “familiar with the data” than the people who display less certainty
The problem with these arguments is that you need to 1. know the data and 2. know how other people would interpret it, because with just 1. you’ll end up comparing your probability assignments with others’, and might mistakenly conclude that their deviation from your estimate is due to lack of access to the data and/or understanding of it... unless you’re comparing it to what your idea of some consensus is.
Meanwhile, I don’t know either, so I’m just making a superficial observation, while not knowing which one of you knows which things here.
Perhaps they also want to signal a sentiment similar to that of Freeman Dyson:
I believe global warming is grossly exaggerated as a problem. It’s a real problem, but it’s nothing like as serious as people are led to believe. The idea that global warming is the most important problem facing the world is total nonsense and is doing a lot of harm. It distracts people’s attention from much more serious problems.
“Significant” is just too vague. Everyone who gave an answer was answering a different question, depending on how they interpreted “significant”.
The survey question itself indicates a primary problem with the discussion of global warming—a conflation of temperature rise and societal cost of temperature rise. First, ask a meaningful question about temperature increase. Then, ask questions about societal cost given different levels of temperature increase.
It seems to be, possibly related to the Libertarian core cluster from OB. In my experience US Libertarians are especially likely to disbelieve in anthropogenic global warming, or to argue it’s not anthropogenic, not potentially harmful, or at least not grounds for serious concern at a public policy level.
I’m rather shocked that the numbers on this are so low. It’s higher than polls indicate as the degree of acceptance in America, but then, we’re dealing with a public where supposedly half of the people believe that tomatoes only have genes if they are genetically modified. Is this a subject on which Less Wrongers are significantly meta-contrarian?
I’m also a bit surprised (I would have expected higher figures), but be careful not to misinterpret the data: it doesn’t say that 70.7% of LWers believe in “anthropogenic global warming”; it’s an average of probabilities. If you look at the quartiles, even the 25th percentile is at p = 55%, meaning that fewer than 25% of LWers assign a probability lower than one half.
It seems to indicate that almost all LWers believe it is true (p > 0.5), but many of them do so with low confidence. Either because they didn’t study the field enough (and therefore refuse to put too much strength in their belief), or because they consider the field too complicated or not well enough understood to justify a strong probability either way.
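A hypothetical numeric illustration of this point (the numbers below are invented, not the actual survey data): a distribution of probability assignments can average around 70% even though nearly every individual respondent assigns p > 0.5.

```python
# Invented probability assignments, chosen so the mean is ~0.70
# while 11 of 12 "respondents" assign p > 0.5.
import statistics

probs = [0.40, 0.55, 0.55, 0.60, 0.65, 0.70,
         0.70, 0.75, 0.80, 0.85, 0.90, 0.95]

mean_p = statistics.mean(probs)               # the "headline" average, ~0.70
q1, q2, q3 = statistics.quantiles(probs, n=4) # quartiles of the assignments
believers = sum(p > 0.5 for p in probs) / len(probs)

print(round(mean_p, 2))  # mean probability, not a fraction of believers
print(q1)                # 25th percentile, still above 0.5
print(believers)         # fraction assigning p > 0.5
```

So a mean of 70.7% with a 25th percentile of 55% is compatible with almost everyone “believing” (p > 0.5), just at modest confidence.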
That’s how I interpreted it in the first place; “believe in anthropogenic global warming” is a much more nebulous proposition anyway. But while anthropogenic global warming doesn’t yet have the same sort of degree of evidence as, say, evolution, I think that an assignment of about 70% probability represents either critical underconfidence or astonishingly low levels of familiarity with the data.
It doesn’t astonish me. It’s not a terribly important issue for everyday life; it’s basically a political issue.
I think I answered somewhere around 70%; while I’ve read a bit about it, there are plenty of dissenters and the proposition was a bit vague.
The claim that changing the makeup of the atmosphere in some way will affect climate in some way is trivially true; a more specific claim requires detailed study.
I would say that it’s considerably more important for everyday life for most people than knowing whether tomatoes have genes.
Climate change may not represent a major human existential risk, but while the discussion has become highly politicized, the question of whether humans are causing large scale changes in global climate is by no means simply a political question.
If the Blues believe that asteroid strikes represent a credible threat to our civilization, and the Greens believe they don’t, the question of how great a danger asteroid strikes actually pose will remain a scientific matter with direct bearing on survival.
I disagree actually.
For most people neither global warming nor tomatoes having genes matters much. But if I had to choose, I’d say knowing a thing or two about basic biology has some impact on how you make your choices with regards to say healthcare or how much you spend on groceries or what your future shock level is.
Global warming, even if it does have a big impact on your life, will not be much affected by your knowing anything about it. Pretty much anything an individual could do against it has a very small impact on how global warming will turn out. Saving $50 a month, or a small improvement in the odds of choosing the better treatment, has a pretty measurable impact on that individual.
Taking global warming as a major threat for now (full disclosure: I think global warming is not a threat to human survival, though it may contribute to societal collapse in a worst-case scenario), it is quite obviously a tragedy-of-the-commons problem.
There is no incentive for an individual to do anything about it or even know anything about it, except to conform to a “low carbon footprint is high status” meme in order to derive benefit in his social life and feeling morally superior to others.
You don’t need to know whether a tomato has genes to know who has a reputation as a good doctor and to do what they say. It might affect your buying habits if you believe that eating genes is bad for you, but it’s entirely probable that a person will make their healthcare and shopping decisions without any reference to the belief at all.
As I just said to xv15, in a tragedy of the commons situation, you either want to conserve if you think enough people are running a sufficiently similar decision algorithm, or you want a policy of conservation in place. The rationalist doesn’t want to fight the barbarians, but they’d rather that they and everyone else on their side be forced to fight.
So one should just start fighting and hope others follow? Why not just be a hypocrite? We humans are good at it, and that way you can promote the social norm you wish to inspire at a much smaller cost!
It comes out much better after cost benefit analysis. Yay rationality! :D
Why bother to learn what global warming is, if it suffices for you to know it is a buzzword that makes the hybrid car you are going to buy more trendy than your neighbour’s pickup truck or your old Toyota (while ignoring the fact that a car has already left most of its carbon footprint by the time it’s rolled off the assembly line and delivered to you)?
If you’re in a population of similar agents, you should expect other people to be doing the same thing, and you’ll be a lot more likely to lose the war than if you actually fight. And if you’re not in a population where you can rely on other people choosing similarly, you want a policy that will effectively force everyone to fight. Any action that “promotes the social norm” but does not really enforce the behavior may be good signaling within the community, but will be useless with respect to not getting killed by barbarians.
A person who only believes in the signaling value of green technologies (hybrids are trendy) does not want a social policy mandating green behavior (the behaviors would lose their signaling value).
A social policy mandating a behavior that is typical of a subgroup shows that that subgroup wields a lot of political power and thus gives it higher status—those pesky blues will have to learn who’s the boss! Hence headscarves forbidden or compulsory, recycling, “in God we trust” as a motto, etc.
Hey! That’s my team. How dare you!
I am familiar with the argument; it just happens that I don’t think this is so, at least not when it comes to coordination on global warming. I should have made that explicit though.
I don’t think you grok how hypocrisy works. By promoting the social norms I don’t follow I make life harder for less skilled hypocrites. The harder life gets for them, the more of them should switch to just following the norms, if that happens to be cheaper.
Sufficiently skilled hypocrites are the enforcers of societal norms.
Also where does this strange idea of a norm not being really enforced come from? Of course it is! The idea that anything worthy of the name social norm isn’t really enforced is an idea that’s currently popular but obviously silly, mostly since it allows us to score status points by pretending to be violating long dead taboos.
The mention of hypocrisy seems to have immediately jumped a few lanes and landed in “doesn’t really enforce”. Ever heard of a double standard? No human society has ever worked without a few. It is perfectly possible to be a mean, lean, norm-enforcing machine and not follow the norms yourself.
He may not want its universal or near-universal adoption (let’s leave aside whether or not it’s legislated), but it is unavoidable. That’s just how fashion works: absent material constraints, it drifts downwards. And since most past, present, and probably most future societies are not middle-class-dominated societies, one can easily argue that the lower classes embracing ecological conspicuousness might do more good: consuming products based on how they judge it (mass consumption is still what drives the economy) and voting on it (since votes are signalling affiliation, to a first approximation).
Also, at the end of the day, well-off people still often cooperate on measures such as mandatory school uniforms.
I don’t believe it’s so either. I think that even assuming they believed global warming was a real threat, much or most of the population would not choose to carry their share of the communal burden. This is the sort of situation where you want an enforced policy ensuring cooperation.
In places where rule of law breaks down, a lot of people engage in actions such as looting, but they still generally prefer to live in societies where rules against that sort of thing are enforced.
In places where people are not much like you, where people don’t know you well (or there are other factors making hypocrisy relatively easy to get away with) you shouldn’t bother promoting costly norms by actually following them.
You probably get more expected utility if you are a hypocrite in such circumstances.
That’s true. But it’s still to your advantage to be in a society where rules promoting the norm are enforced. If you’re in a society which doesn’t have that degree of cohesiveness and is too averse to enforcing cooperation, then you don’t want to fight the barbarians; you want to stay at home and get killed later. This is a society you really don’t want to be in, though; things have to be pretty hopeless before it’s no longer in your interest to promote a policy of cooperation.
This actually makes more sense if you reverse it! Promoting costly norms by following them yourself regardless of the behavior of others only becomes the best policy when the consequences of the norm not being followed are dire!
When I say “things have to be pretty hopeless,” I mean that the prospects for amelioration are low, not that the consequences are dire. Assuming the consequences are severe, taking costly norms on oneself to prevent it makes sense unless the chances of it working are very low.
To avoid slipping into “arguments as soldiers” mode, I just wanted to state that I do think environment-related tragedies of the commons are a big problem for us (come on, the trope namer is basically one!) and we should devote resources to attempting to solve or ameliorate them.
I, on the other hand, find myself among environmentalists thinking that the collective actions they’re promoting mostly have negative individual marginal utility. But I think that acquiring the basic information has positive individual marginal utility (I personally suspect that the most effective solutions to climate change are not ones that hinge on grassroots conservation efforts, but effective solutions will require people to be aware of the problem and take it seriously.)
Wait a sec. Global warming can be important for everyday life without it being important for everyday life that any given individual know about it. In the same way, matters of politics have tremendous bearing on our lives, yet the average person might rationally be ignorant about politics since he can’t have any real effect on it. I think that’s the spirit in which thomblake means it’s a political matter. For most of us, the earth will get warmer or it won’t, and it doesn’t affect how much we are willing to pay for tomatoes at the grocery store (and therefore it doesn’t change our decision rule for how to buy tomatoes), although it may affect how much tomatoes cost.
(It’s a bit silly, but on the other hand I imagine one could have their preferences for tomatoes depend on whether tomatoes had “genes” or not.)
This is a bit like the distinction between microeconomics and macroeconomics. Macroeconomics is the stuff of front page newspaper articles about the economy, really very important stuff. But if you had to take just one economics class, I would recommend micro, because it gives you a way of thinking about choices in your daily life, as opposed to stuff you can’t have any real effect on.
You don’t have much influence on an election if you vote, but the system stops working if everyone acts only according to the expected value of their individual contribution.
This is isomorphic to the tragedy of the commons, like the ‘rationalists’ who lose the war against the barbarians because none of them wants to fight.
Exactly, it IS the tragedy of the commons, but that supports my point, not yours. It may be good for society if people are more informed about global warming, but society isn’t what makes decisions. Individuals make decisions, and it’s not in the average individual’s interest to expend valuable resources learning more about global warming if it’s going to have no real effect on the quality of their own life.
Whether or not you think it’s an individual’s “job” to do what’s socially optimal is completely beside the point here. The fact is they don’t. I happen to think that’s pretty reasonable, but how we wish people would behave doesn’t matter when predicting how they will behave.
Let me try to be clear, since you might be wondering why someone (not me) downvoted you: You started by noting your shock that people aren’t that informed about global warming. I said we shouldn’t necessarily be surprised that they aren’t that informed about global warming. You responded that we’re suffering from the tragedy of the commons, or the tragedy of the rationalists versus the barbarians. I respond that I agree with what you say but not with what you seem to think it means. When we unearth a tragedy of the commons, we don’t go, “Aha! These people have fallen into a trap and if they saw the light, they would know to avoid it!” Casting light on the tragedy of the commons does not make it optimal for individuals to avoid it.
Casting light on the commons is a way of explaining why people would be behaving in such a socially suboptimal way, not a way of bolstering our shock over their behavior.
In a tragedy of the commons, it’s in everybody’s best interests for everybody to conserve resources. If you’re running TDT in a population with similar agents, you want to conserve, and if you’re in a population of insufficiently similar agents, you want an enforced policy of conservation. The rationalist in a war with the barbarians might not want to fight, but because they don’t want to lose even more, they will fight if they think that enough other people are running a similar decision algorithm, and they will support a social policy that forces them and everyone else to fight. If they think that their side can beat the barbarians with a minimal commitment of their forces, they won’t choose either of these things.
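A toy model of the decision rule described above (all payoff numbers and the correlation parameter are invented for illustration): an agent fights only if the fraction of agents whose choices are correlated with its own is large enough that its choice effectively swings the outcome.

```python
# Toy sketch of the barbarians example; payoffs are made up.

def expected_utility(fight: bool, p_correlated: float,
                     win_threshold: float = 0.5,
                     cost_of_fighting: float = 1.0,
                     cost_of_losing: float = 10.0) -> float:
    """Expected utility for one agent who assumes a fraction
    p_correlated of the population makes the same choice it does."""
    fighting_fraction = p_correlated if fight else 0.0
    wins = fighting_fraction >= win_threshold
    cost = cost_of_fighting if fight else 0.0
    if not wins:
        cost += cost_of_losing
    return -cost

def should_fight(p_correlated: float) -> bool:
    return expected_utility(True, p_correlated) > expected_utility(False, p_correlated)

print(should_fight(0.8))  # enough correlated agents: fighting wins the war
print(should_fight(0.1))  # too few: your fighting only adds its own cost
```

An agent that treats its choice as uncorrelated with everyone else’s (p_correlated near zero) never fights, which is the point of the comment: absent correlation, cooperation has to come from an enforced policy instead.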
And this is why xv15 is right and Desrtopa is wrong. Other people do not run TDT or anything similar. Individuals who cooperate with such a population are fools.
TDT is NOT a magic excuse for cooperation. It calls for cooperation in cases when CDT does not only when highly specific criteria are met.
At the Paris meetup Yvain proposed that voting might be rational for TDT-ish reasons, to which I replied that if you have voted for losing candidates at past elections, that means not enough voters are correlated with you. Though now that I think of it, maybe the increased TDT-ish impact of your decision could outweigh the usual arguments against voting, because they weren’t very strong to begin with.
But sometimes it works out anyway. Lots of people can be fools. And lots of people can dislike those who aren’t fools.
People often think “well, if everyone did X, something sufficiently unpleasant would happen, therefore I won’t do it”. They also implicitly believe, though they may not state it, “most people are like me in this regard”. They will also say with their facial expressions and actions, though not words, “people who argue against this are mean and selfish”.
In other words I just described a high trust society. I’m actually pretty sure if you live in Switzerland you could successfully cooperate with the Swiss on global warming for example. Too bad global warming isn’t just a Swiss problem.
Compliance with norms so as to avoid punishment is a whole different issue. And obviously if you willfully defy the will of the tribe when you know that the punishment exceeds the benefit to yourself then you are the fool and the compliant guy is not.
Of course they will. That’s why we invented lying! I’m in agreement with all you’ve been saying about hypocrisy in the surrounding context.
I agree. Desrtopa is taking Eliezer’s barbarians post too far for a number of reasons.
1) Eliezer’s decision theory is at the least controversial which means many people here may not agree with it.
2) Even if they agree with it, it doesn’t mean they have attained rationality in Eliezer’s sense.
3) Even if they have attained this sort of rationality, we are but a small community, and the rest of the world is still not going to cooperate with us. Our attempts to cooperate with them will be impotent.
Desrtopa: Just because it upholds an ideal of rationality that supports cooperation, does not mean we have attained that ideal. Again, the question is not what you’d like to be true, but about what’s actually true. If you’re still shocked by people’s low confidence in global warming, it’s time to consider the possibility that your model of the world—one in which people are running around executing TDT—is wrong.
Those are all good reasons but as far as I can tell Desrtopa would probably give the right answer if questioned about any of those. He seems to be aware of how people actually behave (not remotely TDTish) but this gets overridden by a flashing neon light saying “Rah Cooperation!”.
There are plenty of ways in which I personally avoid cooperation for my own benefit. But in general I think that a personal policy of not informing oneself at even a basic level about tragedies of commons where the information is readily available is not beneficial, because humans have a sufficiently developed propensity for resolving tragedies of commons to give at least the most basic information marginal benefit.
To me, this comment basically concedes that you’re wrong but attempts to disguise it in a face-saving way. If you could have said that people should be informing themselves at the socially optimal level, as you’ve been implying with your TDT arguments above, you would have. Instead, you backed off and said that people ought to be informing themselves at least a little.
Just to be sure, let me rewrite your claim precisely, in the sense you must mean it given your supposed continued disagreement:
Assuming that’s what you’re saying, it’s easy to see that even this is an overreach. The question on the table is whether people should be informing themselves about global warming. Whether the first epsilon of information one gets from “informing oneself” (as opposed to hearing the background noise) is beneficial to the individual relative to the cost of attaining it, is a question of derivatives of cost and benefit functions at zero, and it could go either way. You simply can’t make a general statement about how these derivatives relate for the class of Commons Problems. But more importantly, even if you could, SO WHAT? The question is not whether people should be informing themselves a bit, the question is whether they should be informing themselves at anywhere close to the socially optimal level. And by admitting it’s a tragedy of the commons, we are already ANSWERING that question.
Does that make sense? Am I misunderstanding your position? Has your position changed?
It seems that you are trying to score points for winning the debate. If your interlocutor indeed concedes something in a face-saving way, forcing him to admit it is useless from the truth-seeking point of view.
prase, I really sympathize with that comment. I will be the first to admit that forcing people to concede their incorrectness is typically not the best way of getting them to agree on the truth. See for example this comment.
BUT! On this site we sort of have TWO goals when we argue, truth-seeking and meta-truth-seeking. Yes, we are trying to get closer to the truth on particular topics. But we’re also trying to make ourselves better at arguing and reasoning in general. We are trying to step back and notice what we’re doing, and correct flaws when they are exposed to our scrutiny.
If you look back over this debate, you will see me at several points deliberately stepping back and trying to be extremely clear about what I think is transpiring in the debate itself. I think that’s worth doing, on lesswrong.
To defend the particular sentence you quote: I know that when I was younger, it was entirely possible for me to “escape” from a debate in a face-saving way without realizing I had actually been wrong. I’m sure this still happens from time to time...and I want to know if it’s happening! I hope that LWers will point it out. On LW I think we ought to prioritize killing biases over saving faces.
The key question is: would you believe it if it were your opponent in a heated debate who told you?
I’d like to say yes, but I don’t really know. Am I way off-base here?
Probably the most realistic answer is that I would sometimes believe it, and sometimes not. If not often enough, it’s not worth it. It’s too bad there aren’t more people weighing in on these comments because I’d like to know how the community thinks my priorities should be set. In any case you’ve been around for longer so you probably know better than I.
I think we are speaking about this scenario:
Alice says: “X is true.”
Bob: “No, X is false, because of Z.”
Alice: “But Z is irrelevant with respect to X’, which is what I actually mean.”
Now, Bob agrees with X’. What will Bob say?
1. “Fine, we agree after all.”
2. “Yes, but remember that X is problematic and not entirely equivalent to X’.”
3. “You should openly admit that you were wrong with X.”
If I were in place of Alice, (1) would cause me to abandon X and believe X’ instead. For some time I would insist that they are equivalent, or think that my saying X was only a poor formulation on my part and that I have always believed X’. Later, when I cease identifying with my past self too much, I would admit (at least to myself) that I have changed my opinion. (2) would have similar effects, with more resentment directed at Bob. In case of (3) I would perhaps try to continue debating to win the lost points back by pointing out weak points of Bob’s opinions or debating style, and after calming down I would believe that Bob is a jerk and search hard for reasons why Z is a bad argument. Eventually I would (hopefully) move to X’ too (I don’t like to believe things which are easily attacked), but it would take longer. I would certainly not admit my error on the spot.
(The above is based on memories of my reactions in several past debates, especially before I read about cognitive biases and such.)
Now, to tell how generalisable are our personal anecdotes, we should organise an experiment. Do you have any idea how to do it easily?
I think the default is that people change specific opinions more in response to the tactful debate style you’re identifying, but are less likely to ever notice that they have in fact changed their opinion. I think explicitly noticing one’s wrongness on specific issues can be really beneficial in making a person less convinced of their rightness more globally, and therefore more willing to change their mind in general. My question is how we ought to balance these twin goals.
It would be much easier to get at the first effect by experiment than the second, since the latter is a much more long-term investment in noticing one’s biases more generally. And if we could get at both, we would still have to decide how much we care about one versus the other, on LW.
Personally I am becoming inclined to give up the second goal.
Since here on LW changing one’s opinion is considered a supreme virtue, I would even suspect that long-term users are confabulating that they have changed their opinions when actually they didn’t. Anyway, a technique that might be useful is keeping detailed diaries of what one thinks and reviewing them after a few years (or, for that matter, looking at what one has written on the internet a few years ago). The downside is, of course, that writing beliefs down may make their holders even more entrenched.
Entirely plausible—cognitive dissonance, public commitment, backfire effect, etc. Do you think this possibility negates the value, or are there effective counter-measures?
I don’t think I have an idea how strong all relevant effects and measures are.
There’s a big difference between:
“it’s best if we notice and acknowledge when we’re wrong, and therefore I will do my best to notice and acknowledge when I’m wrong”
“it’s best if we notice and acknowledge when we’re wrong, and therefore I will upvote, praise, and otherwise reinforce such acknowledgements when I notice them”
and
“it’s best if we notice and acknowledge when we’re wrong, and therefore I will downvote, criticize, and otherwise punish failure to do so.”
True in the immediate sense, but I disagree in the global sense that we should encourage face-saving on LW, since doing so will IMO penalize truth-seeking in general. Scoring points for winning the debate is a valid and important mechanism for reinforcing behaviors that lead to debate-winning, and should be allowed in situations where debate-winning correlates to truth-establishment in general, not just for the arguing parties.
This is also true in the immediate sense, but somehow implies that the debate-winning behaviours are a net positive with respect to truth seeking at least in some possible (non-negligibly frequent) circumstances. I find the claim dubious. Can you specify in what circumstances is the debate winning argumentation style superior to leaving a line of retreat?
Line of retreat is superior for convincing your debate partner, but debate-winning behavior may be superior for convincing uninvolved readers, because it encourages verbal admission of fault which makes it easier to discern the prevailing truth as a reader.
That isn’t actually the reason. The reason debate-winning behavior is superior for convincing bystanders is that it appeals to their natural desire to side with the status-gaining triumphant party. As such, it is a species of Dark Art.
This is what I am not sure about. I know that I will be more likely to admit being wrong when I have a chance to do it in a face-saving way (this includes simply saying “you are right” when I am doing it voluntarily and the opponent has debated in a civilised way up to that point) than when my interlocutor tries to force me to do it. I know it but still can’t easily get rid of that bias.
There are several outcomes of a debate where one party is right and the other is wrong:
1. The wrong side admit their wrongness.
2. The wrong side don’t want to admit their wrongness but realise that they have no good arguments and drop out of the debate.
3. The wrong side don’t want to admit their wrongness and still continue debating in hope of defeating the opponent or at least achieving an honourable draw.
4. The wrong side don’t even realise their wrongness.
The exact flavour of debate-winning behaviour I have criticised makes 2 difficult or impossible, consequently increasing probabilities of 1, 3 or 4. 1 is superior to 2 from almost any point of view, but 2 is similarly superior to 3 and 4 and it is far from clear whether the probability of 1 increases more than probabilities of 3 and 4 combined when 2 ceases to be an option, or whether it increases at all.
You left off all the cases where the right side admits their wrongess!
Or where both sides admit their wrongness and switch their opinions, or where a third side intervenes and bans them both for trolling. Next time I’ll try to compose a more exhaustive list.
Don’t forget the case where the two parties are talking at cross purposes (e.g. Alice means that a tree falling in a forest with no-one around generates no auditory sensations and Bob means that it does generate acoustic waves) but neither of them realizes that; it doesn’t even occur to each that the other might be meaning something else by sound. (I’m under the impression that this is relatively rare on LW, but it does constitute a sizeable fraction of all arguments I hear elsewhere, both online and in person.)
Well reasoned.
Yes, you are misunderstanding my position. I don’t think that it’s optimal for most individuals to inform themselves about global warming to a “socially optimal” level where everyone takes the issue sufficiently seriously to take grassroots action to resolve it. Human decisionmaking is only isomorphic to TDT in a limited domain and you can only expect so much association between your decisions and others; if you go that far, you’re putting in too much buck for not enough bang, unless you’re getting utility from the information in other ways. But at the point where you don’t have even basic knowledge of global warming, anticipating a negative marginal utility on informing yourself corresponds to a general policy of ignorance that will serve one poorly with respect to a large class of problems.
If there were no correlation between one person’s decisions and another’s, it would probably not be worth anyone’s time to learn about any sort of societal problems at all, but then, we wouldn’t have gotten to the point of being able to have societal problems in the first place.
Unfortunately that response did not convince me that I’m misunderstanding your position.
If people are not using a TDT decision rule, then your original explicit use of TDT reasoning was irrelevant and I don’t know why you would have invoked it at all unless you thought it was actually relevant. And you continue to imply at least a weaker form of that reasoning.
No one is disputing that there is correlation between people’s decisions. The problem is that correlation does not imply that TDT reasoning works! A little bit of correlation does not imply that TDT works a little bit. Unless people are similar to you AND using TDT, you don’t get to magically drag them along with you by choosing to cooperate.
This is a standard textbook tragedy of the commons problem, plain and simple. From where I’m standing I don’t see the relevance of anything else. If you want to continue disagreeing, can you directly tell me whether you think TDT is still relevant and why?
People don’t use a generalized form of TDT, but human decisionmaking is isomorphic to TDT in some domains. Other people don’t have to consciously be using TDT to sometimes make decisions based on a judgment of how likely it is that other people will behave similarly.
Tragedies of commons are not universally unresolvable. It’s to everyone’s advantage for everyone to pool their resources for some projects for the public good, but it’s also advantageous for each individual to opt out of contributing their resources. But under the institution of governments, we have sufficient incentives to prevent most people from opting out. Simply saying “It’s a tragedy of the commons problem” doesn’t mean there’s no chance of resolving it and therefore no use in knowing about it.
Maybe it would help if you gave me an example of what you have in mind here.
Well, take Stop Voting For Nincompoops, for example. If you were to just spontaneously decide “I’m going to vote for the candidate I really think best represents my principles in hope that that has a positive effect on the electoral process,” you have no business being surprised if barely anyone thinks the same thing and the gesture amounts to nothing. But if you read an essay encouraging you to do so, posted in a place where many people apply reasoning processes similar to your own, the choice you make is a lot more likely to reflect the choice a lot of other people are making.
It seems like this is an example of, at best, a domain on which decisionmaking could use TDT. No one is denying that people could use TDT, though. I was hoping for you to demonstrate an example where people actually seem to be behaving in accordance with TDT. (It is not enough to just argue that people reason fairly similarly in certain domains).
“Isomorphic” is a strong word. Let me know if you have a better example.
Anyway let me go back to this from your previous comment:
No one is claiming tragedies of the commons are always unresolvable. We are claiming that unresolved tragedies of the commons are tragedies of the commons! You seem to be suggesting that knowledge is a special thing which enables us to possibly resolve tragedies of the commons and therefore we should seek it out. But in the context of global warming and the current discussion, knowledge-collection is the tragedy of the commons. To the extent that people are underincentivized to seek out knowledge, that is the commons problem we’re talking about.
If you turn around and say, “well they should be seeking out more knowledge because it could potentially resolve the tragedy”...well of course more knowledge could resolve the tragedy of not having enough knowledge, but you have conjured up your “should” from nowhere! The tragedy we’re discussing is what exists after rational individuals decide to gather exactly as much information as a rational agent “should,” where should is defined with respect to that agent’s preferences and the incentives he faces.
Final question: If TDT reasoning did magically get us to the level of informedness on global warming that you think we rationally should be attaining, and if we are not attaining that level of informedness, does that not imply that we aren’t using TDT reasoning? And if other people aren’t using TDT reasoning, does that not imply that it is NOT a good idea for me to start using it? You seem to think that TDT has something to do with how rational agents “should” behave here, but I just don’t see how TDT is relevant.
NO! It implies that you go ahead and use TDT reasoning—which tells you to defect in this case! TDT is not about cooperation!
wedrifid, RIGHT. Sorry, got a little sloppy.
By “TDT reasoning”—I know, I know—I have been meaning Desrtopa’s use of “TDT reasoning,” which seems to be like TDT + [assumption that everyone else is using TDT].
I shouldn’t say that TDT is irrelevant, but really that it is a needless generalization in this context. I meant that Desrtopa’s invocation of TDT was irrelevant, in that it did nothing to fix the commons problem that we were initially discussing without mention of TDT.
Lack of knowledge of global warming isn’t the tragedy of the commons I’m talking about; even if everyone were informed about global warming, it doesn’t necessarily mean we’d resolve it. Humans can suffer from global climate change despite the entire population being informed about it, and we might find a way to resolve it that works despite most of the population being ignorant.
The question a person starting from a position of ignorance about climate change has to answer is “should I expect that learning about this issue has benefits to me in excess of the effort I’ll have to put in to learn about it?” An answer of “no” corresponds to a low general expectation of information value considering the high availability of the information.
The reason I brought up TDT was as an example of reasoning that relies on a correlation between one agent’s choices and another’s. I didn’t claim at any point that people are actually using TDT. However, if decision theory that assumes correlation between people’s decisions did not outcompete decision theory which does not assume any correlation, we wouldn’t have evolved cooperative tendencies in the first place.
Determining that the gesture amounts to less than the gesture of going into the polling booth and voting for one of the two party lizards seems rather difficult.
Of course, it’s in practice nearly impossible for me to determine through introspection whether what feels like a “spontaneous” decision on my part is in fact being inspired by some set of external stimuli, and if so which stimuli. And without that data, it’s hard to predict the likelihood of other people being similarly inspired.
So I have no business being too surprised if lots of people do think the same thing, either, even if I can’t point to an inspirational essay in a community of similar reasoners as a mechanism.
In other words, sometimes collective shifts in attitude take hold in ways that feel entirely spontaneous (and sometimes inexplicably so) to the participants.
He may be mistaken about how high-trust the society he lives in is. This is something it is actually surprisingly easy to be wrong about: our intuitions aren’t built for a society of hundreds of millions living across an entire continent, and our minds don’t understand that our friends, family and co-workers are not a representative sample of the actual “tribe” we are living in.
Even if that is the case, he is still mistaken about game theory. While the ‘high trust society’ you describe would encourage cooperation to the extent that hypocrisy does not serve as a substitute, the justifications Desrtopa has given are in terms of game theory and TDT. They rely on acting as if other agents are TDT agents when they are not, which is an entirely different issue from dealing with punishment norms by ‘high trust’ agents.
Sure.
We are in agreement on that. But on second thought, this might better explain why I think it doesn’t matter, at least not in this sense, to the issue of whether educating people about global warming matters.
I think we may have been arguing against a less than maximally charitable interpretation of his argument, which I think isn’t that topical a discussion (even if it serves to clear up a few misconceptions). If the less charitable interpretation is the one he now endorses, or even actually intended, that doesn’t seem that relevant to me.
“rah cooperation” I think in practice translates into “I think I live in a high-trust enough society that it’s useful to use this signal to get people to ameliorate this tragedy of the commons situation I’m concerned about.”
In which case you want an enforced policy conforming to the norm. A rational shepherd in a communal grazing field may not believe that if he doesn’t let his flock overgraze, other shepherds won’t either, but he’ll want a policy punishing or otherwise preventing overgrazers.
Yes, and this means that individuals with the ability to influence or enforce policy about global warming can potentially benefit somewhat from knowing about global warming. For the rest of the people (nearly everyone) knowledge about global warming is of no practical benefit.
If the public doesn’t believe in or care about climate change, then public officials who care about climate change won’t get into power in the first place.
In a society that doesn’t believe the barbarians pose a credible threat, they won’t support a leader who wants to make everyone cooperate in the war.
Again you have jumped back to what benefits society—nobody questions the idea that it is bad for the society if the population doesn’t know Commons_Threat_X. Nobody questions the idea that it is bad for an individual if everyone else doesn’t know about Commons_Threat_X. What you seem unwilling to concede is that there is negligible benefit to that same individual for him personally to learn about Commons_Threat_X (in a population that isn’t remotely made up of TDT agents).
In a population of sufficiently different agents, yes. If you’re in a population where you can rationally say “I don’t know whether this is a credible threat or not, but no matter how credible a threat it is, my learning about it and taking it seriously will not be associated with other people learning about it or taking it seriously,” then there’s no benefit in learning about Commons Threat X. If you assume no association between your actions and other people’s, there’s no point in learning about any commons threat ever. But people’s actions are sufficiently associated to resolve some tragedies of commons, so in general, when people are facing commons problems, it will tend to be to their benefit to make themselves aware of them when the information is made readily available.
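The cost-benefit question being argued here can be made concrete with a toy calculation. All the numbers and the `worth_learning` helper below are hypothetical, invented purely for illustration: learning about a commons threat pays off for an individual only when the shift in the collective odds associated with becoming informed, times the individual's stake, exceeds the cost of learning.

```python
# Toy model (all numbers hypothetical): is it worth an individual's
# time to learn about a commons threat?
def worth_learning(cost, stake, p_resolve_given_informed, p_resolve_baseline):
    """Learning pays if the marginal shift in the odds of the commons
    problem being resolved, times the individual's stake in the outcome,
    exceeds the cost of becoming informed."""
    marginal_benefit = (p_resolve_given_informed - p_resolve_baseline) * stake
    return marginal_benefit > cost

# If one person's informedness barely moves the collective odds,
# learning doesn't pay on selfish grounds alone...
print(worth_learning(cost=10, stake=100_000,
                     p_resolve_given_informed=0.500001,
                     p_resolve_baseline=0.5))
# ...but if decisions are correlated, "my learning" is associated with
# many others learning too, and the effective shift is larger.
print(worth_learning(cost=10, stake=100_000,
                     p_resolve_given_informed=0.505,
                     p_resolve_baseline=0.5))
```

The point of the second call is that if your decision to learn is associated with many other people's decisions, the effective shift in the odds is far larger than what your informedness alone buys.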
Any agent encountering a scenario that can be legitimately described as commons problem with a large population of humans will either defect or be irrational. It really is that simple. Cooperate-Bots are losers.
(Note that agents with actually altruistic preferences are a whole different question.)
Yes. Unless being cooperative makes more Cooperative bots (and not more defecting bots) than defecting makes rational bots (and not Cooperative bots) or used to do so and the vast majority of the population are still cooperative bots.
Evolution has in some specific circumstances made humans cooperate and be collectively better off in situations where rational agents with human values wouldn’t have. That’s the beauty of us being adaptation-executers, not fitness-maximizers.
A rational agent among humans could easily spend his time educating them about global warming, if the returns are high enough (I’m not talking about book revenues or payment for appearances or some irrational philanthropist paying him to do so, I’m actually talking about the returns of ameliorating the negative effects of global warming) and the costs low enough. That’s the interesting version of the debate about it being more “important” people know about global warming than tomatoes having genes.
A rational agent among irrational agents can actually be better off helping them cooperate and coordinate to avoid a specific situation in certain conditions rather than just plain old defecting.
If you reread the sentence you may note that I was careful to make that adjective redundant—sufficiently redundant as to border on absurd. “A rational agent will X or be irrational” is just silly. “A rational agent will X” would have been true but misses the point when talking about humans. That’s why I chose to write “An agent will X or be irrational”.
No. Cooperating is different to being a Cooperate-Bot. A rational agent will cooperate when it will create a better outcome via, for example, making other people cooperate. A Cooperate-Bot will cooperate even when it creates bad outcomes and completely independently of the responses of other agents or their environment. The only situations where it can be expected for it to be better to be a Cooperate-Bot than a rational agent that chooses to cooperate are those contrived scenarios where an entity or the environment is specifically constructed to read the mind and motives of the agent and punish it for cooperating for rational reasons.
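The Cooperate-Bot versus rational-cooperator distinction can be sketched in a few lines. The payoffs and the perfect-prediction assumption below are made up for illustration: the unconditional cooperator is exploited by a defector, while the conditional agent cooperates only when it expects cooperation to pay.

```python
# Toy illustration (hypothetical payoffs): Cooperate-Bot vs. an agent
# that conditions its move on its prediction of the opponent.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def cooperate_bot(predicted_opponent_move):
    return "C"  # cooperates no matter what

def conditional_agent(predicted_opponent_move):
    # Cooperates only when it expects the opponent to cooperate too
    # (e.g. because their decision processes are known to be correlated).
    return "C" if predicted_opponent_move == "C" else "D"

def score(agent, opponent_move):
    # Assume a perfect prediction of the opponent's move, for simplicity.
    move = agent(opponent_move)
    return PAYOFF[(move, opponent_move)]

for opp in ("C", "D"):
    print(f"vs {opp}: Cooperate-Bot gets {score(cooperate_bot, opp)}, "
          f"conditional agent gets {score(conditional_agent, opp)}")
```

Against a defector the Cooperate-Bot scores 0 while the conditional agent scores 1; against a cooperator both score 3. Being willing to cooperate for rational reasons loses nothing relative to being a Cooperate-Bot here.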
I don’t understand why you have gone through my various comments here to argue with trivially true statements. I was under the impression that I mostly joined the conversation agreeing with you.
Yes. When an agent can influence the behavior of other agents and cooperating in order to do so is of sufficient benefit it will cooperate in order to influence others. If this wasn’t the case we wouldn’t bother considering most of the game theoretic scenarios that we construct.
That doesn’t mean they can’t win, as in being the only bots left standing. It is trivially easy to construct such situations. Obviously this won’t help the individuals.
I wasn’t arguing with the statements. I think I even generally affirmed your comments at the start of my comments to avoid confusion. I was just emphasising that while this is settled, the best version of the argument about the utility of trying to educate other people on global warming probably isn’t.
Also two comments don’t really seem like “going through several of your comments” in my eyes!
Indeed, I obviously didn’t register the sentence properly, edited.
Desrtopa, can we be careful about what it means to be “different” from other agents? Without being careful, we might reach for any old intuitive metric. But it’s not enough to be mentally similar to other agents across just any metric. For your reasoning to work, they have to be executing the same decision rule. That’s the metric that matters here.
Suppose we start out identical but NOT reasoning as per TDT—we defect in the prisoner’s dilemma, say—but then you read some LW and modify your decision rule so that when deciding what to do, you imagine that you’re deciding for both of us, since we’re so similar after all. Well, that’s not going to work too well, is it? My behavior isn’t going to change any, since, after all, you can’t actually influence it by your own reading about TDT.
So don’t be so quick to cast your faith in TDT reasoning. Everyone can look very similar in every respect EXCEPT the one that matters, namely whether they are using TDT reasoning.
With this in mind, if you reread the bayesians versus barbarians post you linked to, you should be able to see that it has the feel more of an existence proof of a cooperate-cooperate equilibrium. It does not say that we will necessarily find ourselves in such an equilibrium just by virtue of being sufficiently similar.
Obviously the relevant difference is in their decision metrics. But human decision algorithms, sloppy and inconsistent though they are, are in some significant cases isomorphic to TDT.
If we were both defecting in the Prisoner’s dilemma, and then I read some of the sequences and thought that we were both similar decisionmakers and stopped defecting, it would be transparently stupid if you hadn’t also been exposed to the same information that led me to make the decision in the first place. If I knew you had also read it, I would want to calculate the expected value of defecting or cooperating given the relative utilities of the possible outcomes and the likelihood that your decision would correspond to my own.
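The expected-value calculation described here can be sketched as follows; the payoff matrix and correlation parameter are hypothetical. With standard prisoner's-dilemma payoffs, cooperating only beats defecting when the probability that the other player's choice matches yours is quite high.

```python
# Sketch with hypothetical payoffs: expected value of cooperating vs.
# defecting when the other player's move matches mine with probability p.
PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def expected_value(my_move, p_same):
    """Expected payoff if the other player makes the same move as me
    with probability p_same, and the opposite move otherwise."""
    opposite = "D" if my_move == "C" else "C"
    return (p_same * PAYOFF[(my_move, my_move)]
            + (1 - p_same) * PAYOFF[(my_move, opposite)])

for p in (0.2, 0.5, 0.72, 0.9):
    print(f"p={p}: EV(cooperate)={expected_value('C', p):.2f}, "
          f"EV(defect)={expected_value('D', p):.2f}")
```

With these payoffs, cooperating only wins when p exceeds 5/7 (roughly 0.71); at p = 0.5, defection still dominates. That is one way of cashing out the earlier point that a little bit of correlation does not make TDT-style cooperation work a little bit.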
I think you’re assuming much sloppier reasoning on my part than is actually the case (of course I probably have a bias in favor of thinking I’m not engaging in sloppy reasoning, but your comment isn’t addressing my actual position.) Do I think that if I engage in conservation efforts, this will be associated with a significant increase in likelihood that we won’t experience catastrophic climate change? Absolutely not. Those conservation efforts I engage in are almost entirely for the purpose of signalling credibility to other environmentalists (I say “other” but it’s difficult to find anyone who identifies as an environmentalist who shares my outlook,) and I am completely aware of this.

However, the utility cost of informing oneself about a potential tragedy of commons, at least to a basic level, is extremely low where the information is readily available and heavily promoted. Humans have a record of resolving some types of tragedies of commons (although certainly not all,) and the more people who’re aware of and care about the issue, the greater the chance of the population resolving it (they practically never will of their own volition, but they will be more likely to support leaders who take it seriously, not defect from policies that address it, and so on.) The expected utility has to be very low not to overcome the minimal cost of informing oneself.

And of course, you have to include the signalling value of being informed (when you’re informed you can still signal that you don’t think the issue is a good time investment, or significant at all, but you can also signal your familiarity.)
I think that those environmentalists who expect that by public campaigning and grassroots action we can reduce climate change to manageable levels are being unreasonably naive; they’re trying to get the best results they can with their pocket change when what they need is three times their life savings. Sufficient levels of cooperation could resolve the matter simply enough, but people simply don’t work like that. To completely overextend the warfare metaphor, I think that if we’ve got a shot of not facing catastrophe, it’s not going to look like everyone pulling together and giving their best efforts to fight the barbarians, it’s going to look more like someone coming forward and saying “If you put me in charge I’m going to collect some resources from all of you, and we’re going to use them to make a nuclear bomb according to this plan these guys worked out.” Whether the society succeeds or not will hinge on proliferation of information and how seriously the public takes the issue, whether they mostly say things like “sounds like a good plan, I’m in,” and “if it’s really our best chance,” rather than “I care more about the issues on Leader B’s platform” and “I have no idea what that even means.”
What I think you should be arguing here (and what on one level I think you were implicitly arguing) is that in a sufficiently high-trust society, one should spend more resources on educating people about global warming than about tomatoes having genes, if one wants to help them.
It is for their own good, but not their personal good. Like a vaccine shot that has a high rate of nasty side effects but helps keep an infectious disease at bay. If you care about them, it can be rational to take the shot yourself if that’s an effective signal to them that you aren’t trying to fool them. By default they will be modelling you like one of them and interpreting your actions accordingly. Likewise, if you just happen to be good enough at deceit that they will fail to detect it, you can still use that signal to help them, even if you take a fake shot.
Humans are often predictably irrational. The arational processes that maintain the high-trust equilibrium can be used to let you take withdrawals of cooperative behaviour from the bank when the rational incentives just aren’t there. What game theory is good for in this case is realizing how much you are withdrawing, since a rational, game-theory-savvy agent is a pretty good benchmark for some cost analysis. You naturally need to think about the cost to quickly gauge whether the level of trust in a society is high enough and, furthermore, whether the equilibrium is still stable in the mid-term if you burden it in this way.
If it’s not, teach them about tomatoes.
http://en.wikipedia.org/wiki/Effects_of_climate_change_on_humans
Also, it’s already having a fairly substantial effect on polar communities in the US, Canada and Russia, making it difficult to obtain enough food. Many of them are impoverished in the context of the national economy and still whaling-dependent in large part for enough food to survive. Any disruption is a direct threat to food availability.
I’m not sure how that’s a response to what I said. Electing a president who opts to start a nuclear war would obviously be a political issue, and might have even worse effects on humans.
You said it’s not an important issue for everyday life.
Things that significantly impact health (how often are you exposed to pathogens and how severe are they?), weather (makes a big difference even for an urban-living person with access to climate-controlled dwelling like me in the Midwest), the availability of food and water (which you need for not dying), and the stability of where you live (loss of which compromises all the others and requires you to try to find somewhere else and see what happens there) seem like the very definition of important to everyday life.
What I meant was that knowing stuff about the issue isn’t important for everyday life. While the availability of food and water is good to know about, what environmental conditions caused it is less important unless I’m a farmer or policy-maker.
Similarly, a nuclear war would impact health, weather, and the availability of food and water, but I am much better off worrying about whether my car needs an oil change than worrying about whether my government is going to start a nuclear war.
I can sort of agree, insofar as I can’t myself direct the government to never under any circumstances actually do that, and I can’t sequester an Industrial Revolution’s worth of CO2 just by being aware of the problem, but I feel like it misses something. Not everyone is going to be equally unable to meaningfully contribute to solving the problem; if a high baseline level of meaningful awareness of an issue is the norm, it seems like society is more likely to get the benefits of “herd immunity” to that failure mode. It’s not guaranteed, and I wouldn’t call it a sufficient condition by any means for solving the problem, but it increases the odds that any given person whose potential influence over the activities of society is great will be prepared to respond to it in a way that’s actually productive.
I suppose if you think we’ll get FAI soon, this is irrelevant—it’s a whole lot less efficient and theoretically stable a state than some superintelligence just solving the problem in a way that makes it a nonissue and doesn’t rely on corruptible, fickle, perversely-incentivized human social patterns. I’m not so sanguine about that possibility m’self, although I’d love to be wrong.
EDIT: I guess what I’m saying is, why would you NOT want information about something that might be averted or mitigated, but whose likely consequences are a severe change to your current quality of life?
I want all information about all things. But I don’t have time for that. And given the option of learning to spot global warming or learning to spot an unsafe tire on my car, I’ll have to pick #2. WLOG.
Even if it turns out that you can leverage the ability to spot global warming into enough money to pay your next-door neighbor to look at your car tires every morning and let you know if they’re unsafe?
This implies the Maldiveans should not concern themselves with the fact that their entire country (most of it no more than a meter or two above sea level) is projected to lose most of the land sustaining its tourist industry (the main economic engine for the economy), displacing most of its population and leading to greater social volatility in a country enjoying a relative respite after many years of internecine violence.
If you think this applies to the question of whether or not it’s valuable for anyone to know about global warming in concrete terms and the plausible implications for their own lives, then I can only say that I hope for your sake you live somewhere nicely insulated from such possible changes. Me, I’d rather treat it the way I treat earthquakes where I’m from or tornadoes where I live now: things worth knowing about and being at least somewhat prepared for if it’s at all possible to do so.
Why should zero familiarity with the data be astonishing, beyond knowing that there’s a scientific consensus?
I would be unsurprised by zero familiarity in a random sampling of the population, but I would have expected a greater degree of familiarity here as a matter of general scientific literacy.
Stop being astonished so easily. How much familiarity with climate science do you expect the average non-climate scientist to actually have?
I suspect that people displaying >95% certainty about AGW aren’t much more “familiar with the data” than the people who display less certainty—that their most significant difference is that they put more trust on what is a political position in the USA.
But I doubt you question the “familiarity with the data” of the people who are very very certain of your preferred position.
The average LessWronger is almost certainly much more competent to evaluate that global temperatures have been rising significantly, and that at least one human behavior has had a nontrivial effect on this change in temperature, than to evaluate that all life on earth shares a common ancestral gene pool, or that some 13.75 billion years ago the universe began rapidly inflating. Yet I suspect that the modern evolutionary synthesis (including its common-descent thesis), and the Big Bang Theory, are believed more strongly by LessWrongers than is anthropogenic climate change.
If so, then it can’t purely be a matter of LessWrongers’ lack of expertise in climate science; there must be some sociological factors undermining LessWrongers’ confidence in some scientific claims they have to largely take scientists’ word for, while not undermining LessWrongers’ confidence in all scientific claims they have to largely take scientists’ word for.
Plausibly, the ongoing large-scale scientific misinformation campaign by established economic and political interests is having a big impact. Merely hearing about disagreement, even if you have an excellent human-affairs model predicting such disagreement in the absence of any legitimate scientific controversy, will for psychological reasons inevitably shake a generic onlooker’s confidence. Listen to convinced and articulate flat-earth skeptics long enough, and some measure of psychological doubt is inevitable, even if you are savvy enough to avoid letting this doubt creep into your more careful and reflective probability calculations.
I agree that they are likely at least as competent about the former as the latter, but why do you think they are almost certainly much more competent?
Evaluating common descent requires evaluating the morphology, genome, and reproductive behavior of every extremely distinctive group of species, or of a great many. You don’t need to look at each individual species, but you at least need to rule out convergent evolution and (late) lateral gene transfer as adequate explanations of homology. (And, OK, aliens.) How many LessWrongers have put in that legwork?
Evaluating the age of the universe requires at least a healthy understanding of contemporary physics in general, and of cosmology. The difficulty isn’t just understanding why people think the universe is that old, but having a general enough understanding to independently conclude that alternative models are not correct.
That’s a very basic sketch of why I’d be surprised if LessWrongers could better justify those two claims than the mere claim that global temperatures have been rising (which has been in the news a fair amount, and can be confirmed in a few seconds on the Internet) and a decent assessment of the plausibility of carbon emissions as a physical mechanism. Some scientific knowledge will be required, but not of the holistic ‘almost all of biology’ or ‘almost all of physics’ sort indicated above, I believe.
I think you’re seriously failing to apply the Principle of Charity here. Do you think I assume that anyone who claims to “believe in the theory of evolution” understands it well?
RobbBB has already summed up why the levels of certainty shown in this survey would be anomalous when looked at purely from an “awareness of information” perspective, which is why I think that it would be pretty astonishing if lack of information were actually responsible.
AGW is a highly politicized issue, but then, so is evolution, and the controversy on evolution isn’t reflected among the membership of Less Wrong, because people aligned with the bundle of political beliefs which are opposed to it are barely represented here. I would not have predicted in advance that such a level of controversy on AGW would be reflected among the population of Less Wrong.
Desrtopa said:
ArisKatsaris said:
The problem with these arguments is that you need to 1. know the data and 2. know how other people would interpret it, because with just 1. you’ll end up comparing your probability assignments with others’, and might mistakenly conclude that their deviation from your estimate is due to lack of access to the data and/or understanding of it, unless you’re comparing it to what your idea of some consensus is.
Meanwhile, I don’t know either, so I’m just making a superficial observation, without knowing which one of you knows which things here.
Perhaps they also want to signal a sentiment similar to that of Freeman Dyson:
That gets to the issue I had with the question.
“Significant” is just too vague. Everyone who gave an answer was answering a different question, depending on how they interpreted “significant”.
The survey question itself indicates a primary problem with the discussion of global warming—a conflation of temperature rise and societal cost of temperature rise. First, ask a meaningful question about temperature increase. Then, ask questions about societal cost given different levels of temperature increase.
It seems to be, possibly related to the Libertarian core cluster from OB. In my experience US Libertarians are especially likely to disbelieve in anthropogenic global warming, or to argue it’s not anthropogenic, not potentially harmful, or at least not grounds for serious concern at a public policy level.