Better to be testably wrong than to generate nontestable wrongness
Abstract: Test world-models [at least somewhat] scientifically by giving others and yourself the opportunity to generate straightforwardly and immediately testable factual predictions from the world-model. Read up on the facts to make sure you are not wrong before posting, not only to persuade.
I have this theory: there are people with a political opinion of some kind who generate their world-beliefs from that opinion. This is a wrong world-model. It doesn’t work for fact-finding; it works for tribal affiliation. I think it is fair to say we have all been guilty of this on at least several occasions, and that all of us do it for at least some problem domains. Now, suppose you have some logical argument that contradicts other people’s world-model, starting from very basic facts. And you are writing an article.
If you source those basic facts, here’s what happens: the facts are read and accepted, the reasoning is read, the conclusion is reached, the contradiction with the political opinion gets noted, the political opinion does NOT get adjusted, the politically motivated world-model generates a fault inside your argument, and you get an entirely counterproductive and extremely irritating debate about semantics or argumentation techniques. In the end, not an iota changes in the world-model of anyone involved in the debate.
If you don’t source those basic facts, here’s what happens: the facts are read and provisionally accepted, the reasoning is read, the conclusion is reached, the contradiction with the political opinion gets noted, the political opinion does not get adjusted, and the politically motivated world-model generates wrong expectations about basic, easily testable facts. The contradiction eventually gets noted, the wrong world-model gets a minor slap on the nose, and its weight actually does decrease ever so slightly for generating wrong expectations. The person is, out of necessity, doing some actual science here: generating testable hypotheses from their theory about facts they don’t know, and having them tested (and shown wrong, providing feedback in a somewhat scientific manner).
Unfortunately, any alteration to a world-model is uncomfortable (world-models, as memes, have a form of self-preservation), so nobody likes this, and the faulty world-models produce considerable pressure to demand that you source the basic knowledge up front, so that the world-model can know where it can safely generate non-testable faults.
Another big positive effect (for society) happens when you are wrong and you are the one who has been generating facts from a world-model. Someone looks up the facts, and then, blam, your wrong world-model gets a slap on the nose.
Unfortunately that mechanism, too, makes you all the more eager to provide cut-and-pasted citations for your basic facts rather than state the facts as you understand them (which is far more revealing of your argument’s structure: forwards from facts to conclusion vs. backwards from conclusion to facts).
One big drawback is that it is annoying for those who do not actually have screwed-up world-models and just want to know the truth. These folks have to look up whether the assertions are correct. But it is not such a big drawback, as their looking up the sources themselves eliminates the effects of your cherry-picking.
Another drawback is that it results in content that can look like it has lower quality. In terms of marketing value, it is a worse product: it might slap your own world-model on the nose. It just doesn’t sell well. But we aren’t writing for sale, are we?
Another thing to keep in mind is that citations help separate hypotheses from facts, and that is very useful. It would be great to do this in an alternative way for basic knowledge: by marking hypotheses with “I think” and facts with strong assertions like “it is a fact that”. Unfortunately that can make you look very foolish (that fool is sticking his neck out into the guillotine of testable statements!). Few have the guts to do that, and many of the few who do may well not be the most intelligent.
And of course it only works tolerably well when we are certain enough that incorrect factual assertions will quickly be challenged. Fortunately, that is usually the case on the internet. Otherwise, people can slip in incorrect assertions.
Ahh, and also: try not to use the above to rationalize not looking up the sources because it’s a chore.
edit: changed to a much better title. edit: realized that italics are a poor choice for the summary, which needs to be the most readable part.
My summary of your post:
My critique:
People who don’t like an argument for political reasons will attack its facts and reasoning whenever they see an opportunity to do so—there is no XOR function here. If you are debating politics with barely rational people, you will get into a semantic slugfest regardless of how well-cited your facts are.
I think you overestimate how likely people are to look up un-cited facts. Even if people are twice as likely to modify their opinions when they look up the damning facts themselves as when they read the damning facts in citations provided by an opponent, people are probably ten times as likely to read a citation as they are to go do independent research. Citations may be less effective for each person who reads them, but they’ll be read by far more people, and I think the latter effect is stronger.
Even if your premises are correct, a better response would be to take care to strengthen your reasoning and make your assumptions and definitions explicit. Rather than spend effort weakening the apparent strength of the facts, spend effort enhancing the apparent strength of the syllogism. It’s impossible to eliminate literally all logic/semantics based challenges, but it’s quite doable to write a short essay that preempts all but the silliest semantic challenges, and (assuming you have an audience) people will realize fairly quickly that your lone heckler is trying to show off his rhetorical skills rather than trying to make an important point.
I never claimed there was, and it’s entirely irrelevant to the point. I only claimed that they will attack the facts (also, the facts come first on the list, and people often stop right there out of laziness. edit: that’s it, not XOR but ‘or else’, i.e. the garden-variety OR that stops if the first part is true). After they have attacked the facts, when the facts are then presented, there is a minor decrease in the weight assigned to the world-model that led them to attack the facts. No big effects are claimed; it’s just that usually the decrease is zero, and that can be a significant difference.
Strengthening the reasoning is most necessary when you form the opinion. Very often, by the time people express their opinion, all they are strengthening is the justification for an already-formed opinion, whatever that opinion might be.
I think I get the point you’re trying to make: that making people do more of the work of thinking on their own (forming ideology-based opinions on unsourced facts being true or untrue, and looking them up themselves) makes it more likely that they will change those ideologically-motivated beliefs if it turns out they were wrong about the facts.
I agree that the more time people spend thinking about a topic, the more likely they are to change their mind. That’s what curiosity is. However, I don’t know if your specific strategy (presenting controversial unsourced facts instead of citing the sources in your articles) would actually work. What is your evidence that incorrect factual assertions will quickly be challenged on the internet?
Also, people whose beliefs are ideologically motivated are not likely to be curious. Deciding that a controversial fact conflicts with their beliefs and thus must be untrue won’t necessarily lead to them looking it up and then updating... it seems, based on my experience with this kind of person, that they would be more likely to ignore the facts because “the author didn’t cite his sources, therefore he is poorly educated/low-status/stupid, therefore I don’t have to listen to him.” On the contrary, an easy-going person with no particular opinion on the topic (as I’ve been guilty of being in the past) might simply absorb the facts as written without bothering to look them up, since they don’t conflict with any other beliefs and therefore aren’t very interesting.
In my experience, most arguers bite the bullet and actually do assert that the facts are false. Political arguers especially. They see ‘dangerous enemy propaganda’, they go on to counter it regardless of how curious they are, and the first thing to attack is the facts. Then their political motivation effectively slaps them on the nose, instead of rewarding them with a dopamine rush for having cleverly countered a complex argument.
Furthermore, this is not for complex stuff. This is for basic domain-specific knowledge.
Regarding incorrect factual knowledge: well, correct factual knowledge sure always gets challenged by someone if there’s no source attached, so why wouldn’t incorrect knowledge? Everyone loves to be right, and it’s easy to be right about facts. (Of course excluding edge cases, i.e. highly biased audiences.)
Furthermore, it is poor form to learn domain-specific knowledge from someone’s argument, cited or not. One should use a textbook. Compiling bits from many sources needs to be done carefully, and that’s what a good textbook does. Learning bits of information about complex topics out of context, just for the sake of processing an argument, is an approach that leads to many misunderstandings of the basics.
I’ll take your word for it. Maybe we argue with different sorts of people. I’m not really an arguing kind of person either, especially not on political subjects (blah blah blah boring...), so we may well have just had different experiences.
I see no reason why this should be universally true. True most of the time in many kinds of Internet forums, maybe, but that’s not as strong an assertion as saying it’s always true.
Agreed. But we’re not talking about the ideal, perfect knowledge-acquiring human being here; we’re talking about people as they are. (Or I assume we are, anyway, since ideologically motivated, non-curious individuals are not exactly models of rationality either.) I consider myself an unusually widely read person, and still, in domains that I don’t care much about or don’t find interesting, lots of ‘facts’ come from other people who do care about those domains bringing them up in conversation. This isn’t a good thing, or any kind of model to hold up. It’s just true about me, and probably about lots of other people. Of course it leads to misunderstanding of the basics, and that sucks, but that doesn’t make it not true... and refusing to cite your sources in essays won’t do anything about it either.
So, wait. You mean, it’s better to make stuff up that you’d expect based on your worldview, because then when someone calls you on it you could change your mind?
If that’s not what you mean, this could use a rewrite.
Because then you get calibrated properly, and because then it is far easier for others to tell that you are rationalizing, as the facts are far cheaper to check. I don’t propose that you make up what you’d expect; it just naturally tends to happen for most people, and it had better be producing detectable wrongness. Better for “us”; of course you’d feel stupid.
edit: how can you even interpret it this way? “Read up on the facts to make sure you are not wrong before posting, not only to persuade” is in the bloody abstract. You write what you think is true, then you read up on the facts to make sure you aren’t wrong; if you are off, your expertise is too poor or your cognition too motivated, and you are likely to be incorrect.
This is where I got that notion.
Yeah, it was sloppy of me to phrase it that way. Sorry about that.
What I see happening instead is that people have a conclusion that they didn’t derive from facts, then they go fishing for facts to support the conclusion, and if what they believed doesn’t actually follow from the facts, they introduce the errors into the structure of the argument, where they can be denied all day long. No. You write down why you believed it in the first place; then your errors will be in the facts. Then, ideally, you check whether you actually misremembered the facts; if you did, that makes it likely you believed wrongly in the first place.
The few times I have ever seen people change their view online were when they quickly dumped why they believed something and were then shown that they believed it on the basis of wrong factual knowledge (excluding a few special cases with math, where one can demonstrate errors).
And note: I am speaking of elementary domain-specific knowledge here, the kind that anyone with any expertise in the topic would have. You still aren’t sourcing the majority of the assumptions you are making about this knowledge; the assumptions are just implicit rather than explicit. If you can’t get a few explicit ones right from memory, then you’re no expert and should study the topic properly before forming opinions.
The big question is, is it better to be testably wrong or untestably right? :-)
More seriously: on what objective or documented evidence do you base your theory? If it’s just your general feeling based on personal experience (but you haven’t exhaustively documented all relevant personal experience during your life) then I believe you know that human biases make this evidence practically negligible.
OK, I have a hypothesis about the ratings here. Things that people aren’t sure what to think about sit at 0 for a long time. Then, once they go below or above, it works as a slippery slope, as a lower rating influences reading comprehension negatively. Not sure if that’s an intended effect.
A more common hypothesis: people just don’t like the post. Differences in downvote rate over time can be explained by time-zone considerations.
I can test my hypothesis, you know, with a script that will randomly be the first to upvote or downvote comments. Willing to make any bet that the final score (say, five days later) won’t be affected by more than 1 vote point?
[I probably shouldn’t discuss the experiment here, but I kind of doubt you guys can precisely neutralize that kind of bias other than by hiding the vote from yourself before voting. You’ll either strongly under-compensate or over-compensate.]
See also.
Yeah, I know. No idea what percentage of the population has this enabled, though; I think not a big one. Ratings are good for discouraging trolling, but people end up caring too much about them.
Yes. (I’d also support banning you for botting. “Experiments” are an insufficient excuse.)
I’m pretty sure I can get that experiment approved. Let’s form opinions about something testable, to calibrate, then test. Write down how likely you find it that the effect is greater than 1 (i.e. that after I restore the vote in 5 days, the score is [un]correlated with the bot’s action). edit: btw, we need this as a proper prior for Bayesian reasoning anyway.
edit: experiment specification: a randomly chosen, recently posted comment or post is up- or down-voted by 1 voting point. After 5 days, the vote is removed. The average score of the items that were up-voted by 1 point is compared to the average score of the items that were down-voted by 1 point; if there is a self-reinforcing effect in the scoring, the correlation should be positive; if the community instead ‘attempts to give a fair score’ by steering the rating toward the value deemed fair, the correlation should be negative. The priors for positive, neutral, and negative correlation are written down before the test. The test is conducted in a random week (edit: not sure how many data points you can get out of a week’s worth of comments, though; it may require a longer or shorter period).
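Concretely, the assignment and the final comparison could look something like the sketch below. This is only an illustration of the specification; the function names and the permutation test are placeholders of my own, and fetching the final scores after 5 days is left to whatever mechanism gets approved.

```python
import random
from statistics import mean

def assign_treatments(item_ids, seed=0):
    """Randomly give each recent comment/post an initial +1 or -1 nudge."""
    rng = random.Random(seed)
    return {item_id: rng.choice([+1, -1]) for item_id in item_ids}

def analyze(final_scores, treatments, n_permutations=10_000, seed=1):
    """Compare the mean final score of the +1 group vs the -1 group,
    with a simple permutation test for the observed difference."""
    up = [final_scores[i] for i, t in treatments.items() if t == +1]
    down = [final_scores[i] for i, t in treatments.items() if t == -1]
    observed = mean(up) - mean(down)

    # Permutation test: shuffle the group labels and count how often a
    # difference at least this large arises by chance.
    rng = random.Random(seed)
    pooled = up + down
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = mean(pooled[:len(up)]) - mean(pooled[len(up):])
        if abs(diff) >= abs(observed):
            count += 1
    return observed, count / n_permutations
```

A positive observed difference would point toward the self-reinforcing effect; a negative one toward the community steering ratings back to a ‘fair’ value.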
I’m quite curious myself as to what the outcome would be. I do expect a positive correlation, based on the well-known, well-studied phenomenon of ‘priming’, but my confidence is not very high. And of course I am not going to do it without approval, as that would be unethical.
You would need a great many results to get this accurate to within one karma point, I would think. And since your theory is about posts, you shouldn’t mix comments with posts; the voting patterns on the two are far different. So this would take a while. Not that that’s a huge problem. I support the idea.
I think both comments and posts should be evaluated (separately), but I agree that the voting patterns are very different.
Regarding how long it’d take, that depends on the strength of the effect. What I think is the strongest effect is that anything negative gets read much more critically: where a positively voted post’s assertions will be read and seen in a positive light if at all plausible, negatively voted assertions are likely to be immediately challenged (‘does this compel me to believe it?’ style). That should be a general reflex (it’s just being a good Bayesian reasoner), but it leads to circular reasoning problems when everyone is reasoning this way together.
If I had a theory that you guys tend to apply Bayesian reasoning in practice when reading posts, the vote spiral would follow as a testable hypothesis. That’s just how this stuff works in networks: Bayesian reasoning requires tracking where the data is originating from.
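As a toy illustration of that spiral (all the numbers here are made up, and the weight_on_score parameter is invented purely for the illustration), consider a simulation where each voter blends a noisy private impression of an item with the score they currently see, and votes on the blend:

```python
import random

def simulate_item(true_quality, n_voters, weight_on_score, seed=None):
    """Toy model: each voter forms a noisy private impression of quality,
    blends it with the currently visible score, and votes on the blend."""
    rng = random.Random(seed)
    score = 0
    for _ in range(n_voters):
        private = true_quality + rng.gauss(0, 1)  # noisy private read
        blended = (1 - weight_on_score) * private + weight_on_score * score
        score += 1 if blended > 0 else -1
    return score

# A mildly good post (true_quality = 0.2), 50 voters, 200 simulated threads.
low  = [simulate_item(0.2, 50, 0.1, seed=s) for s in range(200)]
high = [simulate_item(0.2, 50, 0.9, seed=s) for s in range(200)]
print(sum(s < 0 for s in low), "of 200 end negative when voters mostly ignore the score")
print(sum(s < 0 for s in high), "of 200 end negative when voters lean heavily on the score")
```

When the weight on the visible score is high, the final score mostly reflects the sign of the first vote or two rather than the underlying quality, which is the circularity I mean.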
I think halo effects are really to blame here—if I see something downvoted, I’m far more likely to read it, because it’s more of an exception to the norm. If it’s bad, I may downvote it further. I’m sure this is the case for many.
This is the primary reason I read this post. But I did not downvote this.
I’m actually finding this hypothesis more interesting than the one in your OP (partly because it looks more testable, funnily enough). Bash out a script to watch LW and vote on things as they appear, leave it to generate data as long as one likes, then hey presto. Tiny bit tempted to do it myself, approval or not.
The sample size you need to detect an effect depends on that effect’s size. So far, so obvious, so I did a quick & dirty power analysis to get some numbers, although for posts in the discussion section rather than comments. (Posts on main are too infrequent, and I’d expect a smaller effect for comments, so comments would need a bigger sample.) If anyone cares I can throw up my code.
If my numbers are right and you took a sample of 100 upvoted posts and 100 downvoted discussion posts, the bootstrap confidence interval for the effect size would be 3.7-6.8 points wide. Even with a sample of 400 upvoted posts and 400 downvoted (and that’s 3-4 months’ worth of discussion posts), it’d be 2.2-3.0 points wide. So unless the priming effect’s strong (at least 2-4 points) a week of data wouldn’t be conclusive, at least not for posts. Comments might be more doable, though.
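For the curious, the kind of check I mean is roughly a percentile bootstrap of the difference in group means, something like the sketch below (scores_up and scores_down are placeholders for the two groups’ final scores; this may differ in detail from the quick & dirty analysis behind the numbers above):

```python
import random
from statistics import mean

def bootstrap_ci_width(scores_up, scores_down, n_boot=2000, alpha=0.05, seed=0):
    """Width of a percentile-bootstrap confidence interval for the
    difference in mean final score between the two groups."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        up = [rng.choice(scores_up) for _ in scores_up]        # resample with replacement
        down = [rng.choice(scores_down) for _ in scores_down]
        diffs.append(mean(up) - mean(down))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot)]
    return hi - lo
```

Feeding in resamples of 100 vs. 100 (or 400 vs. 400) scores drawn from real discussion-post data is what produces interval widths like those quoted above; the wider the interval, the larger the priming effect that could still hide inside it.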
Yeah, that’ll take a while. We’ll see about testing. The proposed effect could be strong if each subsequent vote is affected by the previous ones, so that the initial disturbance does not ‘dissolve’ into a larger number, but I kind of doubt it. I don’t care a whole ton about votes; I generally take them as a measure of the clarity of the point, but any priming would most definitely make them a less useful gauge of clarity. Also, there’s apparently voting via the recent-comments thread; tbh I nearly forgot you can read comments expanded, since that doesn’t seem very interesting, as the majority of comments are brief and meaningless outside their context.
I’m becoming increasingly tempted to submit automation-detection scripts to the LessWrong codebase.