Um, isn’t the knowledge of many spurious arguments and no strong ones over a period of time weak evidence that no better argument exists (or at least, has currently been discovered?)
I do agree with the second part of your post about argument matching, though. The problem becomes even more serious when it is often not an argument against X from someone who takes the position, but a strawman argument they have been taught by others for the specific purposes of matching up more sophisticated arguments to.
Yes. This is discussed well in the comments on What Evidence Filtered Evidence?.
No, because that assumes that the desire to argue about a proposition is the same among rational and insane people. The situation I observe is just the opposite: There are a large number of propositions and topics that most people are agnostic about or aren’t even interested in, but that religious people spend tremendous effort arguing for (circumcision, defense of Israel) or against (evolution, life extension, abortion, condoms, cryonics, artificial intelligence).
This isn’t confined to religion; it’s a general principle that when some group of people holds an extreme viewpoint, they will A) attract lots of people with poor reasoning skills, B) take positions on otherwise non-controversial issues based on incorrect beliefs, and C) spend lots of time arguing against things that nobody else spends time arguing against, using arguments based on the very flaws in their beliefs that make them outliers to begin with.
Therefore, there is a large class of controversial issues on which one side has been argued almost exclusively by people whose reasoning is especially corrupt on that particular issue.
I don’t think many religious people spend “tremendous effort” arguing against life extension, cryonics, or artificial intelligence. For the vast majority of the population, religious or not, these issues simply aren’t prominent enough to think about. To be sure, when religious individuals do think about them, they more often than not seem to come down on the against side (see, for example, computer scientist David Gelernter arguing against the possibility of AI), and that may be explainable by general tendencies in religion (especially the degree to which religion promotes cached thoughts about the soul and the value of death).
But even that is only true to a limited extent. Consider life extension: within Judaism, some Orthodox ethicists have taken very positive views of it. Indeed, my impression is that the Orthodox are more likely to favor life extension than non-Orthodox Jews. My tentative hypothesis is that Orthodox Judaism places a very high value on human life and downplays the afterlife, at least compared to Christianity and Islam (some specific strains of Orthodoxy, such as some chassidic sects, do emphasize the afterlife a bit more). Conservative and Reform Judaism, by contrast, have been more directly influenced by Christian values and have therefore picked up more of the Christian cached thoughts about death.
I don’t think, however, that this issue can be explained exclusively by Christianity, since I’ve encountered Muslims, neopagans, Buddhists, and Hindus with similar attitudes. (The neopagans all grew up in Christian cultures, so one could say they were influenced by that, but that explanation doesn’t carry much weight given how much neopaganism seems to be a reaction against Christianity.)
All I mean to say is this: Suppose you say, “100 people have made arguments against proposition X, and all of them were bad arguments; therefore the probability of finding a good argument against X is some (monotonically increasing) function of 1⁄100.”
If X is a proposition that is particularly important to people in cult C because they believe something very strange related to X, and 90 of those 100 arguments were made by people in cult C, then you should believe that the probability of finding a good argument against X is a function of something between 1⁄10 and 1⁄100.
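To make the discounting concrete, here is a minimal sketch in Python. It is my own illustration, not anything claimed in the thread: the 100⁄90 split comes from the example above, while the credibility weight and the rough 1/(n + 2) rule-of-succession-style estimate are assumptions chosen only to show the direction of the adjustment, and the function names are hypothetical.

```python
# A minimal sketch of the discounting described above (illustration only).
# Assumptions: failed arguments from the biased group can be down-weighted
# by a "credibility" factor, and the chance that the next attempt produces
# a good argument is estimated rule-of-succession style as ~1/(n + 2).

def effective_sample_size(n_total: int, n_biased: int, credibility: float) -> float:
    """Count failed arguments, weighting those from the biased group by `credibility`."""
    n_unbiased = n_total - n_biased
    return n_unbiased + credibility * n_biased

def p_good_argument_exists(n_failures: float) -> float:
    """After n_failures uninformative failed attempts, estimate ~1/(n + 2)
    that the next attempt yields a good argument."""
    return 1.0 / (n_failures + 2.0)

# Naive reading: all 100 failed arguments count equally.
print(p_good_argument_exists(effective_sample_size(100, 90, 1.0)))  # ~1/102

# Discounted reading: cult C's 90 arguments carry no evidential weight,
# so only the 10 failures from outsiders count.
print(p_good_argument_exists(effective_sample_size(100, 90, 0.0)))  # ~1/12
```

Giving full weight to all 100 failures puts the estimate near 1⁄100; treating cult C’s failed arguments as uninformative moves it near 1⁄10, which is the range described above.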