http://johncarlosbaez.wordpress.com/
...seems to be all about global warming. I rate that as a top dud cause, but there is a lot of noise (and thus money, fame, etc.) associated with it, so obviously it will attract those interested in such things.
If someone tells you they are trying to save the planet, you should normally treat that with considerable scepticism. People like to associate themselves with grand causes for reasons that apparently have a lot to do with social signalling and status—and very little to do with the world actually being at risk.
Some take it too far: http://en.wikipedia.org/wiki/Messiah_complex
Surely the scepticism should be directed at the question of whether their recipe actually does save the world, rather than at their motivation. I don’t think that an analysis of motivations for something like this even begins to pay any rent.
For me, this is a standard technique. Whenever someone tells me how altruistic they are or have been, I try to figure out which replicators are likely to be involved in the display. It often makes a difference whether someone’s brain has been hijacked by memes, whether they are signalling their status to prospective business partners or their wealth to prospective mates, or whatever.
For example, if they are attempting to infect me with the same memes that have hijacked their own brain, my memetic immune system is activated—whereas if they are trying to convince people what a fine individual they are, my reaction is different.
What you said seems fine; what doesn’t is the reason you chose to say it in this context: the implied argument. That form of expression makes it hard to argue with. Say it out loud.
There is more from me on the topic in my “DOOM!” video. Spoken out loud, no less ;-)
This doesn’t address the problem with that particular comment. What you implied is well known; the problem I pointed out was not that it’s hard to figure out, but that you protected your argument with a weaselly form of expression.
It sounds as though you would like to criticise an argument that you think I am implicitly making—but since I never actually made the argument, that gives you an amorphous surface to attack. I don’t plan to do anything to assist with that matter just now—other priorities seem more pressing.
Yes, that’s exactly the problem. We should all strive to make our arguments easy to attack, and errors easy to notice and address. Not having that priority hurts the epistemic commons.
My argument was general—I think you want something specific.
However, preparing specific statements tailored to each of the DOOM-promoters involved is a non-trivial task, which would hurt me by occupying my time with matters of relatively minor significance.
It would be nice if I had time available to devote to such tasks, but in the meantime, I am pretty sure the epistemic commons can get along without my additional input.
Since the significance of the matter is one of the topics under discussion, it can’t be used as an argument.
Edit: But it works as an element of a description of why certain actions take place.
What I mean is that I assign the matter relatively minor significance—so I get on with other things.
I am not out to persuade others that my analysis is correct; again, I have other things to do than publicly parade an analysis of my priorities.
Maybe my priority analysis is correct. Maybe my priority analysis is wrong. In either case, it is my main reason for not doing such tasks.
Yes, I did make a mistake by missing this distinction (a factual description of how a belief caused actions, as opposed to a normative discussion of those actions given the question of whether the belief is correct).
As a separate matter, I don’t believe the premise is correct (that any additional effort is required to phrase things non-weaselly), and thus that the belief in question plays even an explanatory role. But this is also under discussion, so I can’t use it as an argument.
If someone tells you they are trying to save the planet, you should normally treat that with considerable scepticism.

Well, yes, but if someone tells you they are the tallest person in the world, you also should treat that with considerable scepticism. After all, there can only be one person who actually is the tallest person in the world, and it’s unlikely in the extreme that one random guy would be that person. A one-in-seven-billion chance is small enough to reject out of hand, surely!
The guy looks pretty tall though. How about you get out a tape-measure and then consult the records on height?
“Considerable scepticism” is not an argument against a claim. It is an argument for more evidence. What evidence makes John Baez’s claims that he is trying to save the world more likely to be signalling than a genuine attempt?
If someone I met told me they were the tallest person in the world, I would indeed treat that with considerable scepticism. I would count my knowledge about the 7 billion people in the world as evidence weighing heavily against the claim.
Your 7 billion people just set your prior probability that he is the tallest, before you actually examine his size. Once you have seen that he is somewhat tall, you can start developing a better prior:
If he’s taller than any of the people you know, that puts him in at least the top three-hundredth, so fewer than 24 million people remain as contenders. If he’s taller than anyone you’ve ever seen, that puts him in at least the top two-thousandth, so fewer than 3.5 million of those 7 billion are actually potential evidence that he’s wrong.
So now our prior is 1 in 3.5 million. Now it’s time to look for evidence. At this point, the number of people in the world is irrelevant: it’s already been factored into the equation. What evidence can we use to find our posterior probability?
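A minimal sketch of that arithmetic in Python, assuming the illustrative figures above (the 7 billion population and the 1-in-300 and 1-in-2000 fractions are the comment’s rough guesses, not measured quantiles):

```python
# Rough arithmetic from the comment above; all figures are illustrative.
population = 7_000_000_000

# "Taller than any of the people you know": roughly the top 1/300.
contenders_known = population / 300    # ~23.3 million remain

# "Taller than anyone you've ever seen": roughly the top 1/2000.
contenders_seen = population / 2000    # 3.5 million remain

# Prior that this particular tall stranger is the single tallest person,
# given only that he clears the "taller than anyone you've seen" bar:
prior = 1 / contenders_seen
print(f"prior = {prior:.2e}")          # ~2.86e-07, i.e. 1 in 3.5 million
```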
A cool thing about Bayesian reasoning is that you can cut extreme numbers down to reasonable sizes with some very cheap and very quick tests. In the case of possible ulterior motives for claiming to be saving the world, you can with some small effort distinguish between the “signalling” and “genuine” hypotheses. What tests—what evidence—should we be looking for here, to spot which one is the case?
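To make that concrete, here is a minimal odds-form Bayes update in Python; the two tests echo the tape-measure and height-records suggestions above, but the likelihood ratios are invented for illustration, not measured:

```python
# Odds-form Bayes: each independent test multiplies the odds by its
# likelihood ratio P(evidence | tallest) / P(evidence | not tallest).
# Both likelihood ratios below are made-up numbers for illustration.

def update_odds(prior_odds: float, likelihood_ratios: list[float]) -> float:
    """Multiply the prior odds by each test's likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior_odds = 1 / 3_500_000  # the narrowed prior from the comment above

tests = [
    1000.0,  # tape-measure puts him above the tallest recorded living person
    200.0,   # the height records list someone matching his description
]

posterior_odds = update_odds(prior_odds, tests)
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability ~ {posterior_prob:.3f}")
```

Two cheap tests take the hypothesis from a 1-in-3.5-million long shot to something worth taking seriously, which is the sense in which extreme priors are not grounds for rejecting a claim out of hand.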