Overcoming the mind-killer

I’ve been asked to start a thread in order to continue a debate I began in the comments of an otherwise-unrelated post. I started to write a post on that topic, found myself introducing my work by way of explanation, and then realized that this introduction was a sub-topic all its own, one of substantial relevance to at least one of the replies to my comments in that post—and a much better topic for a first-ever post/thread.
So I’m going to write that introductory post first, and then start another thread specifically on the topic under debate.
I run issuepedia.org, a wiki site largely dedicated to the rational analysis of politics.
As part of that analysis, it covers areas such as logical fallacies (and the larger domain of what I call “rhetorical deceptions” and which LessWrong calls “the dark arts”), history, people, organizations, and any other topics necessary to understand an issue. Coverage of each area generally includes collecting sources (such as news articles, editorials, and blog posts), essential details to provide a quick overview, and usually an attempt to draw some kind of conclusion [1] about the topic’s ethical significance based, as much as possible, on the sources collected. (Readers are, of course, free to use the wiki format to offer counterarguments and additional facts/sources.)
I started Issuepedia in 2005, largely in an attempt to understand how Bush could possibly have been re-elected (am I deluded, or is half the country deluded? if the latter, how did this happen?). Most of the content is my writing, as I am better at writing than at community-building, but it is all freely copyable under a CC license. I do not receive any money for my work on the site; it does accept donations, but this fact is not heavily advertised and so far there have been no donors. It does not display advertisements, nor have I advertised it (other than linking to it in contexts where its content seems relevant, such as comments on blog entries). I am considering doing the latter at some point when I have sufficient time to give the project the focus it will need in order to grow successfully.
Rationality and Politics
My main hypothesis [2] in starting Issuepedia is that it is, in fact, possible to be rational about politics, to overcome its “mind-killing” qualities—if given sufficient “thinking room” in which to record and work through all the relevant (and often mind-numbing) details involved in most political issues in a public venue where you can “show your work” and others may point out any errors and omissions. I’m trying to use wiki technology as an intelligence-enhancing, bias-overcoming device.
Politics contains issues within issues within issues. Arriving at a rational conclusion about any given issue will often depend on being able to draw reasonable conclusions about a handful of other issues, each of which may have other sub-issues affecting it, and so on.
Keeping track of all of these dependencies, however, is somewhat beyond most people’s native emotional intuition and working memory capacity (including mine). Even when we consciously try to overcome built-in biases (such as allegiance to our perceived “tribes”, unexamined beliefs acquired in childhood, etc.), our hind-brains want to take the fine, complex grain of reality and turn it into a simple good-vs.-bad or us-vs.-them moral map drawn with a blunt magic marker—something we can easily remember and apply.
On the other hand, many issues really do seem to boil down to such a simple narrative, something best stated in quite stark terms. Individuals who are making an effort to be measured and rational often seem to reject out of hand the possibility that such simple, clear-cut conclusions could be valid, leading to the opposite bias—a sort of systemic “fallacy of moderation”. This can cause popular acquiescence to beliefs that are essentially wrong, such as the claim that “the Democrats do it too” when someone points out evils committed by the latest generation of Republicans. (Yes, they do it too—but much less often, and much less egregiously overall.)
I propose that there must exist some set of factual information upon which each question ultimately rests, if followed far enough “down”. In other words, if you exhaustively and recursively map out the sub-issues for each issue, you must eventually arrive at an issue which can be resolved by reference to facts known or knowable. If no such point can be reached, then the issue cannot possibly have any real-world significance—because if anyone is in any way affected by the issue, then there is the fact of that dependency which must somehow tie in; the trick is figuring out the actual nature of that dependency.
My approach in Issuepedia is to break each major issue down into sub-issues, each of which has its own page for collecting information and analysis on that particular issue, then to do the same for each of those issues until each sub-branch (or “rootlet”, if you prefer to stay in-metaphor) has encountered the “bedrock” of questions which can be determined factually. Once the “bedrock” questions have been answered, the issues which rest upon them can be resolved, and so on.
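To make the recursive structure concrete, here is a minimal sketch of the issue-tree model described above, in Python. Everything here is a hypothetical illustration for this post, not actual Issuepedia code, and the aggregation rule at the end is deliberately naive: real issues need more nuance than a simple conjunction of sub-answers.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Issue:
    """One node in the issue tree: a question resting on sub-issues."""
    question: str
    # A "bedrock" issue is one resolved directly by known/knowable facts.
    factual_answer: Optional[bool] = None
    sub_issues: list["Issue"] = field(default_factory=list)

def resolve(issue: Issue) -> Optional[bool]:
    """Recursively resolve an issue from its bedrock facts upward.

    Returns True/False once every branch reaches a factual answer,
    or None if some branch still needs further digging.
    """
    if issue.factual_answer is not None:  # bedrock reached
        return issue.factual_answer
    if not issue.sub_issues:              # no facts and no sub-issues yet
        return None
    answers = [resolve(sub) for sub in issue.sub_issues]
    if any(a is None for a in answers):
        return None
    # Toy rule: the issue holds only if all of its sub-issues hold.
    return all(answers)
```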
Documenting these connections, and the facts upon which they ultimately rest, ideally allows each reader to reconstruct the line of reasoning behind a given conclusion. If they disagree with that conclusion, then the facts and reasoning are available for them to figure out where the error lies—and the wiki format makes it easy for them to post corrections; eventually, all rational parties should be able to reach agreement.
I won’t go so far as to claim that Issuepedia carries out this methodology with any degree of rigor, but it’s what I’m working towards.
I’m also aware that recent studies have shown that many people aren’t influenced by facts once they’ve made up their minds (e.g. here). Since I have many times observed myself change my own opinion [3] in response to facts, I am working with the hypothesis that this ability may be a cognitive attribute that some people have and others lack—in much the same way that (apparently) only 32% of the adult population can reason abstractly. If it turns out that I do not, in fact, possess this ability to a satisfactory degree, then finding some way to improve it will become a priority.
Methodology for Determination of Fact
The question of how to methodically go about determining fact—i.e. which assertions may be provisionally treated as true and which should be subjected to further inquiry—came up in the earlier discussion, and is something which I think is ripe for formalization.
flaws in the existing methodologies
Up until now, society has depended upon a sort of organic, slow and inefficient but reasonably thorough vetting of new ideas by a number of processes. Those who are more familiar with this area of study should feel free to note any errors or omissions in my understanding, but here is my list of processes (which I’ll call “epistemic arenas” [4]) by which we have traditionally arrived at societal truths:
science (the scientific process—the scientific method for performing and documenting experiments, peer-review in publications, the practice of replicating previous experiments, and probably other practices generally considered to be part of “science”)
government: the court system, the legislative sausage factory
social processing: people (especially friends) discussing their views on the ethics of various items large and small
media: newspapers, radio, TV
The flaws in each of these methodologies have become much clearer due to the ease and speed with which they may now be researched because of the Internet. A brief summary:
The scientific process is clearly the best of the lot, but it can be gamed and exploited: fake papers with sciencey-looking graphs and formulas (e.g. this), sometimes published in fake journals with sciencey-sounding names (e.g. JP&S) or backed by sciencey-sounding institutions (SEPP, SSRC, SPPI, OISM), are used to promote ideas which have already been soundly defeated by the real scientific process. Lists of hundreds of scientists dissenting from the prevailing view may not, in fact, contain any scientists actually qualified to make an authoritative statement (i.e. one deserving credence without having to hear the actual argument) on the subject, and gain popular credibility only because of the use of the word “scientist”.
On the other hand, legitimate ideas which for some reason are considered taboo sometimes cannot gain entry to this process, and their proponents must publish findings by other means—means which can look very similar to the methods used to promote illegitimate ideas. How can we tell the difference? We can, but it takes time—thus “a lie can travel around the world while the truth is still putting on its boots” by exploiting these weaknesses.
Bringing the machinery of the scientific process to bear on any given issue is also quite expensive and time-consuming; it can be many years (or decades, in the case of significant new ideas) before enough evidence is gathered to overturn prior assumptions. This fact can be exploited in both directions: important but “inconvenient” new facts can be drowned in a sea of publicity arguing against them, and well-established facts can be taken out politically by denialist “sniping” (repeating well-refuted claims over and over again until more people are familiar with those claims than with the refutations thereof, leading to popular belief that the claims must be true).
Also, because the public is generally unaware of how the scientific process functions, they are likely to give it far less weight than it deserves (when they correctly identify that a given conclusion truly is scientifically supported, anyway). For example, an attack commonly used by creationists against the theory of evolution by natural selection is that it is “only a theory”. Such an argument is only convincing to someone lacking an understanding of the degree to which a hypothesis must withstand interrogation before it starts to be cited as a “theory” in scientific circles.
It should be pretty obvious that government’s epistemic process is flawed: many bad or outright false ideas become “facts” after being enshrined in law or upheld by court decisions. (I could discuss this at length if needed.)
Social processing seems to do much better at spotting ethical transgressions (harm and fairness violations), but isolated social groups and communities are vulnerable to memetic infection by ideas which become self-reinforcing in the absence of good communication outside the group. Such ideas tend to survive by discouraging that communication and encouraging distrust of outside ideas (e.g. by labeling those outside the community as untrustworthy or tainted in some way), perpetuating the cycle.
The mainstream media was, for many decades, the antidote to the problems in the other arenas. Independent newspapers would risk establishment disfavor in exchange for increased circulation—and although publishing politically inconvenient truths was not the only way to increase circulation, it was certainly one of them.
Whether it happened deliberately and conspiratorially, or simply through many different interests arriving at the same solutions to their problems (while those with the power to stop it looked the other way as industry lobbyists rewrote the laws to encourage and promote those common solutions), media consolidation has effectively taken the mainstream media out of the game as a voice of dissent.
Issuepedia’s methodology
The basic idea behind Issuepedia’s informal epistemic methodology is that truth—at least on issues where there is no clear experiment which can be performed to resolve the matter—is best determined by successive approximation from an initial guess, combined with encouragement of dissenting arguments.
Someone makes a statement—a “claim”. Others respond, giving evidence and reasoning either supporting or contradicting the claim (a “counterpoint”). Those who still agree with the initial statement then defend it against the counterpoints with further evidence and/or reasoning. If any counterpoint remains which nobody has been able to reasonably contradict, then the claim fails; otherwise it stands.
By keeping a record of the objections offered—in a publicly-editable space, via previously unavailable technology (the internet)—it becomes unnecessary to rehash the ensuing debate-branch if someone raises the same objection again. They may add new twigs, but once an argument has been answered, the answers will be there every time the same argument is raised. This is an essential tool for defeating denialism, which I define as the repeated re-use of already-defeated but otherwise persuasive arguments; to argue with a denialist, one need simply refer to the catalogue of arguments against their position, and reuse those arguments until the denialist comes up with a new one. This puts the burden on the denialists (finally!) and takes it off those who are sincerely trying to determine the nature of reality.
This also makes it possible for large decisions involving many complex factors to be more accurately updated if knowledge of those factors changes significantly. One would never end up in a situation where one is asking “why do we do things this way?”, much less “why do we still do things this way, even though it hasn’t made sense since X happened 20 years ago?” because the chain of reasoning would be thoroughly documented.
At present, the methodology has to be implemented by hand; I am working on software to automate the process.
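As a concrete illustration of the rule above, here is a minimal sketch of what such software might compute, in Python. The names and structure are hypothetical, invented for this post rather than taken from any actual Issuepedia tool; it assumes the simple recursive rule that an argument stands exactly when every objection to it has itself been answered.

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    """A claim or counterpoint, with the responses made against it."""
    text: str
    responses: list["Argument"] = field(default_factory=list)

def stands(arg: Argument) -> bool:
    """An argument stands iff no response against it stands,
    i.e. every objection has itself been reasonably contradicted."""
    return not any(stands(response) for response in arg.responses)

# Usage: objections are recorded once; a repeated objection simply
# points back at the refutations already catalogued against it.
claim = Argument("X is the case", responses=[
    Argument("Counterpoint A", responses=[
        Argument("A was answered with this evidence"),
    ]),
])
print(stands(claim))  # True: the only counterpoint is itself answered
```

This is also what would make the record cumulative: adding a new twig anywhere in the tree automatically updates the status of everything above it.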
criticism of this methodology
[dripgrind] Your standard of verification seems to be the Wikipedia standard—if you can find a “mainstream” source saying something, then you are happy to take it as fact (provided it fits your case).
[woozle] I am “happy to take it as fact” until I find something contradictory. When that happens, I generally make note of both sources and look for more authoritative information. If you have a better methodology, I am open to suggestions. […] The “Wikipedia standard” seems to work pretty well, though—didn’t someone do a study comparing Wikipedia’s accuracy with Encyclopedia Britannica’s, and find that they came out about even?
[dripgrind] So your standard of accepting something as evidence is “a ‘mainstream source’ asserted it and I haven’t seen someone contradict it”. That seems like you are setting the bar quite low. Especially because we have seen that [a specific claim woozle made] was quickly debunked (or at least, contradicted, which is what prompts you to abandon your belief and look for more authoritative information) by simple googling. Maybe you should, at minimum, try googling all your beliefs and seeing if there is some contradictory information out there.
“Setting the bar quite low”: yes, the initial bar for accepting a statement is low. This is by design, based on the idea of successive approximation of truth (as outlined above) and my secondary hypothesis “that it is important for people to share their opinions on things, regardless of how much thought has been put into those opinions.” (See note 2 below.)
Certainly this methodology can lead to error if the size of the observing group is insufficiently large and active—but it only takes one person saying “wait, that’s nonsense!” to start the corrective process. I don’t see that degree of responsiveness in any of the other epistemic arenas, and I don’t believe it adds any new weaknesses—except that there is no easy/quick way to gauge the reliability of a given assertion. That is a weakness which I plan to address via the structured debate tool (although I had not until now consciously realized that it was needed).
If this explanation of the process still seems problematic, I’m quite happy to discuss it further; getting the process right is obviously critical.
I will be posting next on the specific claims we were discussing, i.e. 9/11 “conspiracy theories”. It will probably take several more days at least. Will update this post with a link when the second one is ready.
Notes

1. For example, the article about Intelligent Design concludes that “As with creationism in its other forms, ID’s main purpose was (and remains) to insinuate religion into public school education in the United States. It has no real arguments to offer, its support derives exclusively from Christian ideological protectionism and evangelism, and its proponents have no interest in revising their own beliefs in the light of evidence new to them. It is a form of denialism.”

The idea here is to “call a spade a spade”: if something is morally wrong (or right), say so—rather than giving the appearance of impartiality priority over reaching sound conclusions (e.g. “he-said/she-said journalism” in the media, or the “NPOV” requirement on Wikipedia). You may start out with a lot of wrong statements, but they will be statements which someone believed firmly enough to write—and when they are refuted, everyone who believed them will have access to the refutation, doing much more towards reducing overall error than if you only recorded known truths.
2. A secondary hypothesis is that it is important for people to share their opinions on things, regardless of how much thought has been put into those opinions. I have two reasons for this: (1) it helps overcome individual bias by pooling the opinions of many (in an arena where hopefully all priors and reasoning may eventually be discussed and resolved), and (2) there are many terrible things that happen but which we lack the immediate power to change; if we neither do nor say anything about them, others may reasonably assume that we consent to these things. Saying something at least helps prevent the belief that there is no dissent, which otherwise might be used to justify the status quo.
3. I am hoping that this observation is not itself a self-delusion or some form of akrasia. Toward the end of confirming or ruling out akrasia, I have made a point of posting my positions on many topics, with an open offer to defend any of those positions against rational criticism. If anyone believes, after observing any of the debates I have been involved in, that I am refusing to change my position in response to facts which clearly indicate such a change should take place, then I will add a note to that effect under the position in question.
4. These have a lot in common with what David Brin calls “disputation arenas”, but they don’t seem to be exactly the same thing.